00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2435 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3696 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.200 Using shallow fetch with depth 1 00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.200 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.273 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.273 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.480 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.491 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.503 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.503 > git config core.sparsecheckout # timeout=10 00:00:06.513 > git read-tree -mu HEAD # timeout=10 00:00:06.528 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.548 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.548 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.634 [Pipeline] Start of Pipeline 00:00:06.652 [Pipeline] library 00:00:06.654 Loading library shm_lib@master 00:00:08.738 Library shm_lib@master is cached. Copying from home. 00:00:08.775 [Pipeline] node 00:00:08.866 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.871 [Pipeline] { 00:00:08.880 [Pipeline] catchError 00:00:08.881 [Pipeline] { 00:00:08.890 [Pipeline] wrap 00:00:08.899 [Pipeline] { 00:00:08.904 [Pipeline] stage 00:00:08.906 [Pipeline] { (Prologue) 00:00:08.920 [Pipeline] echo 00:00:08.921 Node: VM-host-SM38 00:00:08.926 [Pipeline] cleanWs 00:00:08.936 [WS-CLEANUP] Deleting project workspace... 00:00:08.936 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.943 [WS-CLEANUP] done 00:00:09.238 [Pipeline] setCustomBuildProperty 00:00:09.306 [Pipeline] httpRequest 00:00:09.641 [Pipeline] echo 00:00:09.643 Sorcerer 10.211.164.20 is alive 00:00:09.652 [Pipeline] retry 00:00:09.654 [Pipeline] { 00:00:09.668 [Pipeline] httpRequest 00:00:09.672 HttpMethod: GET 00:00:09.672 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.673 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.674 Response Code: HTTP/1.1 200 OK 00:00:09.675 Success: Status code 200 is in the accepted range: 200,404 00:00:09.675 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.821 [Pipeline] } 00:00:09.839 [Pipeline] // retry 00:00:09.846 [Pipeline] sh 00:00:10.126 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.144 [Pipeline] httpRequest 00:00:10.437 [Pipeline] echo 00:00:10.438 Sorcerer 10.211.164.20 is alive 00:00:10.450 [Pipeline] retry 00:00:10.452 [Pipeline] { 00:00:10.468 [Pipeline] httpRequest 00:00:10.474 HttpMethod: GET 00:00:10.474 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.475 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.476 Response Code: HTTP/1.1 200 OK 00:00:10.476 Success: Status code 200 is in the accepted range: 200,404 00:00:10.477 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:27.051 [Pipeline] } 00:00:27.074 [Pipeline] // retry 00:00:27.083 [Pipeline] sh 00:00:27.374 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:29.929 [Pipeline] sh 00:00:30.217 + git -C spdk log --oneline -n5 00:00:30.217 c13c99a5e test: Various fixes for Fedora40 00:00:30.217 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:30.217 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:30.217 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:30.217 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:30.239 [Pipeline] writeFile 00:00:30.256 [Pipeline] sh 00:00:30.549 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.564 [Pipeline] sh 00:00:30.850 + cat autorun-spdk.conf 00:00:30.850 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.850 SPDK_TEST_NVME=1 00:00:30.850 SPDK_TEST_FTL=1 00:00:30.850 SPDK_TEST_ISAL=1 00:00:30.850 SPDK_RUN_ASAN=1 00:00:30.850 SPDK_RUN_UBSAN=1 00:00:30.850 SPDK_TEST_XNVME=1 00:00:30.850 SPDK_TEST_NVME_FDP=1 00:00:30.850 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.859 RUN_NIGHTLY=1 00:00:30.861 [Pipeline] } 00:00:30.875 [Pipeline] // stage 00:00:30.891 [Pipeline] stage 00:00:30.892 [Pipeline] { (Run VM) 00:00:30.903 [Pipeline] sh 00:00:31.191 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.191 + echo 'Start stage prepare_nvme.sh' 00:00:31.191 Start stage prepare_nvme.sh 00:00:31.191 + [[ -n 7 ]] 00:00:31.191 + disk_prefix=ex7 00:00:31.191 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:31.191 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:31.191 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:31.191 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.191 ++ SPDK_TEST_NVME=1 00:00:31.191 ++ SPDK_TEST_FTL=1 00:00:31.191 ++ SPDK_TEST_ISAL=1 00:00:31.191 ++ 
SPDK_RUN_ASAN=1 00:00:31.191 ++ SPDK_RUN_UBSAN=1 00:00:31.191 ++ SPDK_TEST_XNVME=1 00:00:31.191 ++ SPDK_TEST_NVME_FDP=1 00:00:31.191 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.191 ++ RUN_NIGHTLY=1 00:00:31.191 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:31.191 + nvme_files=() 00:00:31.191 + declare -A nvme_files 00:00:31.191 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.191 + nvme_files['nvme.img']=5G 00:00:31.191 + nvme_files['nvme-cmb.img']=5G 00:00:31.191 + nvme_files['nvme-multi0.img']=4G 00:00:31.192 + nvme_files['nvme-multi1.img']=4G 00:00:31.192 + nvme_files['nvme-multi2.img']=4G 00:00:31.192 + nvme_files['nvme-openstack.img']=8G 00:00:31.192 + nvme_files['nvme-zns.img']=5G 00:00:31.192 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.192 + (( SPDK_TEST_FTL == 1 )) 00:00:31.192 + nvme_files["nvme-ftl.img"]=6G 00:00:31.192 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.192 + nvme_files["nvme-fdp.img"]=1G 00:00:31.192 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.192 + for nvme in "${!nvme_files[@]}" 00:00:31.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:31.192 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.192 + for nvme in "${!nvme_files[@]}" 00:00:31.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:00:31.192 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:31.192 + for nvme in "${!nvme_files[@]}" 00:00:31.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:31.455 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.455 + for nvme in "${!nvme_files[@]}" 00:00:31.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:31.455 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.455 + for nvme in "${!nvme_files[@]}" 00:00:31.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:31.455 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.455 + for nvme in "${!nvme_files[@]}" 00:00:31.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:31.455 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.455 + for nvme in "${!nvme_files[@]}" 00:00:31.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:31.455 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.455 + for nvme in "${!nvme_files[@]}" 00:00:31.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:00:31.717 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:31.717 + for nvme in "${!nvme_files[@]}" 00:00:31.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:31.717 Formatting 
'/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.717 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:31.717 + echo 'End stage prepare_nvme.sh' 00:00:31.717 End stage prepare_nvme.sh 00:00:31.731 [Pipeline] sh 00:00:32.017 + DISTRO=fedora39 00:00:32.017 + CPUS=10 00:00:32.017 + RAM=12288 00:00:32.017 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.017 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:32.017 00:00:32.017 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:32.017 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:32.017 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:32.017 HELP=0 00:00:32.017 DRY_RUN=0 00:00:32.017 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:00:32.017 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:32.017 NVME_AUTO_CREATE=0 00:00:32.017 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:00:32.017 NVME_CMB=,,,, 00:00:32.017 NVME_PMR=,,,, 00:00:32.017 NVME_ZNS=,,,, 00:00:32.017 NVME_MS=true,,,, 00:00:32.017 NVME_FDP=,,,on, 00:00:32.017 SPDK_VAGRANT_DISTRO=fedora39 00:00:32.017 SPDK_VAGRANT_VMCPU=10 00:00:32.017 SPDK_VAGRANT_VMRAM=12288 00:00:32.017 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.017 SPDK_VAGRANT_HTTP_PROXY= 00:00:32.017 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.017 SPDK_OPENSTACK_NETWORK=0 00:00:32.017 VAGRANT_PACKAGE_BOX=0 00:00:32.017 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.017 FORCE_DISTRO=true 00:00:32.018 VAGRANT_BOX_VERSION= 00:00:32.018 EXTRA_VAGRANTFILES= 00:00:32.018 NIC_MODEL=e1000 00:00:32.018 00:00:32.018 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:32.018 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:34.567 Bringing machine 'default' up with 'libvirt' provider... 00:00:34.827 ==> default: Creating image (snapshot of base box volume). 00:00:35.088 ==> default: Creating domain with the following settings... 
00:00:35.088 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733320896_cc0b840b7e01c1af31e6 00:00:35.088 ==> default: -- Domain type: kvm 00:00:35.088 ==> default: -- Cpus: 10 00:00:35.088 ==> default: -- Feature: acpi 00:00:35.088 ==> default: -- Feature: apic 00:00:35.088 ==> default: -- Feature: pae 00:00:35.088 ==> default: -- Memory: 12288M 00:00:35.088 ==> default: -- Memory Backing: hugepages: 00:00:35.088 ==> default: -- Management MAC: 00:00:35.088 ==> default: -- Loader: 00:00:35.088 ==> default: -- Nvram: 00:00:35.088 ==> default: -- Base box: spdk/fedora39 00:00:35.088 ==> default: -- Storage pool: default 00:00:35.088 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733320896_cc0b840b7e01c1af31e6.img (20G) 00:00:35.088 ==> default: -- Volume Cache: default 00:00:35.088 ==> default: -- Kernel: 00:00:35.088 ==> default: -- Initrd: 00:00:35.088 ==> default: -- Graphics Type: vnc 00:00:35.088 ==> default: -- Graphics Port: -1 00:00:35.088 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.088 ==> default: -- Graphics Password: Not defined 00:00:35.088 ==> default: -- Video Type: cirrus 00:00:35.088 ==> default: -- Video VRAM: 9216 00:00:35.088 ==> default: -- Sound Type: 00:00:35.088 ==> default: -- Keymap: en-us 00:00:35.088 ==> default: -- TPM Path: 00:00:35.088 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.088 ==> default: -- Command line args: 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:00:35.088 ==> default: -> value=-drive, 00:00:35.088 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:35.088 ==> default: -> value=-device, 00:00:35.088 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.088 ==> default: Creating shared folders metadata... 00:00:35.348 ==> default: Starting domain. 00:00:37.257 ==> default: Waiting for domain to get an IP address... 00:00:55.381 ==> default: Waiting for SSH to become available... 00:00:55.381 ==> default: Configuring and enabling network interfaces... 00:00:58.702 default: SSH address: 192.168.121.252:22 00:00:58.702 default: SSH username: vagrant 00:00:58.702 default: SSH auth method: private key 00:01:00.617 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.760 ==> default: Mounting SSHFS shared folder... 00:01:10.146 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:10.146 ==> default: Checking Mount.. 00:01:11.089 ==> default: Folder Successfully Mounted! 00:01:11.351 00:01:11.351 SUCCESS! 00:01:11.351 00:01:11.351 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:11.351 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.351 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:11.351 00:01:11.362 [Pipeline] } 00:01:11.378 [Pipeline] // stage 00:01:11.388 [Pipeline] dir 00:01:11.389 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:11.390 [Pipeline] { 00:01:11.404 [Pipeline] catchError 00:01:11.406 [Pipeline] { 00:01:11.419 [Pipeline] sh 00:01:11.705 + vagrant ssh-config --host vagrant 00:01:11.706 + sed -ne '/^Host/,$p' 00:01:11.706 + tee ssh_conf 00:01:14.259 Host vagrant 00:01:14.259 HostName 192.168.121.252 00:01:14.259 User vagrant 00:01:14.259 Port 22 00:01:14.259 UserKnownHostsFile /dev/null 00:01:14.259 StrictHostKeyChecking no 00:01:14.259 PasswordAuthentication no 00:01:14.259 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:14.259 IdentitiesOnly yes 00:01:14.259 LogLevel FATAL 00:01:14.259 ForwardAgent yes 00:01:14.259 ForwardX11 yes 00:01:14.259 00:01:14.273 [Pipeline] withEnv 00:01:14.275 [Pipeline] { 00:01:14.287 [Pipeline] sh 00:01:14.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:14.571 source /etc/os-release 00:01:14.571 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.571 # Minimal, systemd-like check. 
00:01:14.571 if [[ -e /.dockerenv ]]; then 00:01:14.571 # Clear garbage from the node'\''s name: 00:01:14.571 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.571 # $HOSTNAME is the actual container id 00:01:14.571 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.571 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.571 # We can assume this is a mount from a host where container is running, 00:01:14.571 # so fetch its hostname to easily identify the target swarm worker. 00:01:14.571 container="$(< /etc/hostname) ($agent)" 00:01:14.571 else 00:01:14.571 # Fallback 00:01:14.571 container=$agent 00:01:14.571 fi 00:01:14.571 fi 00:01:14.571 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.571 ' 00:01:14.844 [Pipeline] } 00:01:14.858 [Pipeline] // withEnv 00:01:14.864 [Pipeline] setCustomBuildProperty 00:01:14.873 [Pipeline] stage 00:01:14.874 [Pipeline] { (Tests) 00:01:14.885 [Pipeline] sh 00:01:15.164 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.544 [Pipeline] sh 00:01:15.832 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.111 [Pipeline] timeout 00:01:16.112 Timeout set to expire in 50 min 00:01:16.114 [Pipeline] { 00:01:16.129 [Pipeline] sh 00:01:16.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:16.986 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:17.000 [Pipeline] sh 00:01:17.285 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:17.561 [Pipeline] sh 00:01:17.847 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.128 [Pipeline] sh 00:01:18.417 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:18.678 ++ readlink -f spdk_repo 00:01:18.678 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.678 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.678 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.678 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.678 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.678 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:18.678 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.678 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:18.678 + cd /home/vagrant/spdk_repo 00:01:18.678 + source /etc/os-release 00:01:18.678 ++ NAME='Fedora Linux' 00:01:18.678 ++ VERSION='39 (Cloud Edition)' 00:01:18.678 ++ ID=fedora 00:01:18.678 ++ VERSION_ID=39 00:01:18.678 ++ VERSION_CODENAME= 00:01:18.678 ++ PLATFORM_ID=platform:f39 00:01:18.678 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:18.678 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.678 ++ LOGO=fedora-logo-icon 00:01:18.678 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:18.678 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.678 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:18.678 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.678 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.678 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.678 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:18.678 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.678 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:18.678 ++ SUPPORT_END=2024-11-12 00:01:18.678 ++ VARIANT='Cloud Edition' 00:01:18.678 ++ VARIANT_ID=cloud 00:01:18.678 + uname -a 00:01:18.678 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:18.678 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.678 Hugepages 00:01:18.678 node hugesize free / total 00:01:18.678 node0 1048576kB 0 / 0 00:01:18.678 node0 2048kB 0 / 0 00:01:18.678 00:01:18.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.678 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.678 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.678 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:18.678 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:18.941 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:18.941 + rm -f /tmp/spdk-ld-path 00:01:18.941 + source autorun-spdk.conf 00:01:18.941 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.941 ++ SPDK_TEST_NVME=1 00:01:18.941 ++ SPDK_TEST_FTL=1 00:01:18.941 ++ SPDK_TEST_ISAL=1 00:01:18.941 ++ SPDK_RUN_ASAN=1 00:01:18.941 ++ SPDK_RUN_UBSAN=1 00:01:18.941 ++ SPDK_TEST_XNVME=1 00:01:18.941 ++ SPDK_TEST_NVME_FDP=1 00:01:18.941 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.941 ++ RUN_NIGHTLY=1 00:01:18.941 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.941 + [[ -n '' ]] 00:01:18.941 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.941 + for M in /var/spdk/build-*-manifest.txt 00:01:18.941 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.941 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.941 + for M in /var/spdk/build-*-manifest.txt 00:01:18.941 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.941 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.941 + for M in /var/spdk/build-*-manifest.txt 00:01:18.941 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.941 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:18.941 ++ uname 00:01:18.941 + [[ Linux == \L\i\n\u\x ]] 00:01:18.941 + sudo dmesg -T 00:01:18.941 + sudo dmesg --clear 00:01:18.941 + dmesg_pid=4986 00:01:18.941 + [[ Fedora Linux == FreeBSD ]] 00:01:18.941 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.941 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.941 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.941 + sudo dmesg -Tw 00:01:18.941 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.941 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.941 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.941 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.941 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.941 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.941 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.941 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.941 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.941 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.941 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.941 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:18.941 Test configuration: 00:01:18.941 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.941 SPDK_TEST_NVME=1 00:01:18.941 SPDK_TEST_FTL=1 00:01:18.941 SPDK_TEST_ISAL=1 00:01:18.941 SPDK_RUN_ASAN=1 00:01:18.941 SPDK_RUN_UBSAN=1 00:01:18.941 SPDK_TEST_XNVME=1 00:01:18.941 SPDK_TEST_NVME_FDP=1 00:01:18.941 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.941 RUN_NIGHTLY=1 14:02:20 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:18.941 14:02:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:18.941 14:02:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.941 14:02:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.941 14:02:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.941 14:02:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.942 14:02:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.942 14:02:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.942 14:02:20 -- paths/export.sh@5 -- $ export PATH 00:01:18.942 14:02:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.942 14:02:20 -- common/autobuild_common.sh@439 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:18.942 14:02:20 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:18.942 14:02:20 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733320940.XXXXXX 00:01:18.942 14:02:20 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733320940.o4AO9I 00:01:18.942 14:02:20 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:18.942 14:02:20 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:18.942 14:02:20 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:18.942 14:02:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:18.942 14:02:20 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.942 14:02:20 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:18.942 14:02:20 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:18.942 14:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.942 14:02:20 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:18.942 14:02:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:18.942 14:02:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:18.942 14:02:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:18.942 14:02:20 -- spdk/autobuild.sh@16 -- $ date -u 00:01:18.942 Wed Dec 4 02:02:20 PM UTC 2024 00:01:18.942 14:02:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:19.204 LTS-67-gc13c99a5e 00:01:19.204 14:02:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:19.204 14:02:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:19.204 14:02:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:19.204 14:02:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:19.204 14:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.204 ************************************ 00:01:19.204 START TEST asan 00:01:19.204 ************************************ 00:01:19.204 using asan 00:01:19.204 14:02:20 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:19.204 00:01:19.204 real 0m0.000s 00:01:19.204 user 0m0.000s 00:01:19.204 sys 0m0.000s 00:01:19.204 ************************************ 00:01:19.204 END TEST asan 00:01:19.204 ************************************ 00:01:19.204 14:02:20 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:19.204 14:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.204 14:02:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:19.204 14:02:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:19.204 14:02:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:19.204 14:02:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:19.204 14:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.204 ************************************ 00:01:19.204 START TEST ubsan 00:01:19.204 ************************************ 00:01:19.204 using ubsan 00:01:19.204 14:02:20 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:19.204 00:01:19.204 real 0m0.000s 00:01:19.204 user 0m0.000s 00:01:19.204 sys 
0m0.000s 00:01:19.204 ************************************ 00:01:19.204 END TEST ubsan 00:01:19.204 ************************************ 00:01:19.204 14:02:20 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:19.204 14:02:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.204 14:02:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:19.204 14:02:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:19.204 14:02:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:19.204 14:02:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:19.204 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:19.204 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:19.777 Using 'verbs' RDMA provider 00:01:32.592 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:42.594 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:42.594 Creating mk/config.mk...done. 00:01:42.594 Creating mk/cc.flags.mk...done. 00:01:42.594 Type 'make' to build. 00:01:42.594 14:02:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:42.594 14:02:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:42.594 14:02:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:42.594 14:02:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.594 ************************************ 00:01:42.594 START TEST make 00:01:42.594 ************************************ 00:01:42.594 14:02:42 -- common/autotest_common.sh@1114 -- $ make -j10 00:01:42.594 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:42.594 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:42.594 meson setup builddir \ 00:01:42.594 -Dwith-libaio=enabled \ 00:01:42.594 -Dwith-liburing=enabled \ 00:01:42.594 -Dwith-libvfn=disabled \ 00:01:42.594 -Dwith-spdk=false && \ 00:01:42.594 meson compile -C builddir && \ 00:01:42.594 cd -) 00:01:42.594 make[1]: Nothing to be done for 'all'. 
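For reference, the xnvme configure-and-build step that this make target runs can be reproduced by hand with the same Meson invocation echoed in the recipe above. A minimal sketch, assuming the CI checkout at /home/vagrant/spdk_repo/spdk; the flags and PKG_CONFIG_PATH are taken verbatim from the command printed by make:

    # Configure and build the bundled xnvme, mirroring the recipe echoed above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir   # drives ninja in builddir, producing the output below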
00:01:44.504 The Meson build system 00:01:44.504 Version: 1.5.0 00:01:44.504 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:44.504 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:44.504 Build type: native build 00:01:44.504 Project name: xnvme 00:01:44.504 Project version: 0.7.3 00:01:44.504 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:44.504 C linker for the host machine: cc ld.bfd 2.40-14 00:01:44.504 Host machine cpu family: x86_64 00:01:44.504 Host machine cpu: x86_64 00:01:44.504 Message: host_machine.system: linux 00:01:44.504 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:44.504 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:44.504 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:44.505 Run-time dependency threads found: YES 00:01:44.505 Has header "setupapi.h" : NO 00:01:44.505 Has header "linux/blkzoned.h" : YES 00:01:44.505 Has header "linux/blkzoned.h" : YES (cached) 00:01:44.505 Has header "libaio.h" : YES 00:01:44.505 Library aio found: YES 00:01:44.505 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:44.505 Run-time dependency liburing found: YES 2.2 00:01:44.505 Dependency libvfn skipped: feature with-libvfn disabled 00:01:44.505 Run-time dependency appleframeworks found: NO (tried framework) 00:01:44.505 Run-time dependency appleframeworks found: NO (tried framework) 00:01:44.505 Configuring xnvme_config.h using configuration 00:01:44.505 Configuring xnvme.spec using configuration 00:01:44.505 Run-time dependency bash-completion found: YES 2.11 00:01:44.505 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:44.505 Program cp found: YES (/usr/bin/cp) 00:01:44.505 Has header "winsock2.h" : NO 00:01:44.505 Has header "dbghelp.h" : NO 00:01:44.505 Library rpcrt4 found: NO 00:01:44.505 Library rt found: YES 00:01:44.505 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:44.505 Found CMake: /usr/bin/cmake (3.27.7) 00:01:44.505 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:44.505 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:44.505 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:44.505 Build targets in project: 32 00:01:44.505 00:01:44.505 xnvme 0.7.3 00:01:44.505 00:01:44.505 User defined options 00:01:44.505 with-libaio : enabled 00:01:44.505 with-liburing: enabled 00:01:44.505 with-libvfn : disabled 00:01:44.505 with-spdk : false 00:01:44.505 00:01:44.505 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.505 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:44.505 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:44.505 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:44.505 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:44.763 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:44.763 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:44.763 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:44.763 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:44.763 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:44.763 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:44.763 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:44.763 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:44.763 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:44.763 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:44.763 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:44.763 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:44.763 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:44.763 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:44.763 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:44.763 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:44.763 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:44.763 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:44.763 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:44.763 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:44.763 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:44.763 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:44.763 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:44.763 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:44.763 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:44.763 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:44.763 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:44.763 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:44.763 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:44.763 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:44.763 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:45.022 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:45.022 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:45.022 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:45.022 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:45.022 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:45.022 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:45.022 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:45.022 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:45.022 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:45.022 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:45.022 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:45.022 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:45.022 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:45.022 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:45.022 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:45.022 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:45.022 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:45.022 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:45.022 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_file.c.o 00:01:45.022 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:45.022 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:45.022 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:45.022 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:45.022 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:45.022 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:45.022 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:45.022 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:45.022 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:45.022 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:45.022 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:45.022 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:45.022 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:45.022 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:45.022 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:45.022 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:45.281 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:45.281 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:45.281 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:45.281 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:45.281 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:45.281 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:45.281 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:45.281 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:45.281 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:45.281 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:45.281 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:45.281 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:45.281 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:45.281 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:45.281 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:45.281 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:45.281 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:45.281 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:45.281 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:45.281 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:45.281 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:45.281 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:45.281 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:45.540 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:45.540 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:45.540 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:45.540 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:45.540 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:45.540 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:45.540 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:45.540 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:45.540 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:45.540 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:45.540 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:45.540 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:45.540 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:45.540 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:45.540 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:45.540 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:45.540 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:45.540 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:45.540 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:45.540 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:45.540 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:45.540 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:45.540 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:45.540 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:45.540 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:45.540 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:45.540 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:45.540 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:45.540 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:45.540 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:45.540 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:45.540 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:45.540 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:45.540 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:45.540 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:45.540 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:45.799 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:45.799 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:45.799 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:45.799 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:45.799 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:45.799 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:45.799 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:45.799 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:45.799 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:45.799 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:45.799 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:45.799 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:45.799 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:45.799 [142/203] Compiling C object 
tests/xnvme_tests_buf.p/buf.c.o 00:01:45.799 [143/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:45.799 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:45.799 [145/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:45.799 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:45.799 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:46.058 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:46.058 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:46.058 [150/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:46.058 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:46.058 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:46.058 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:46.058 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:46.058 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:46.058 [156/203] Linking target lib/libxnvme.so 00:01:46.058 [157/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:46.058 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:46.058 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:46.058 [160/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:46.058 [161/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:46.058 [162/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:46.058 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:46.058 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:46.058 [165/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:46.317 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:46.317 [167/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:46.317 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:46.317 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:46.317 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:46.317 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:46.317 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:46.317 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:46.317 [174/203] Linking static target lib/libxnvme.a 00:01:46.577 [175/203] Linking target tests/xnvme_tests_async_intf 00:01:46.577 [176/203] Linking target tests/xnvme_tests_buf 00:01:46.577 [177/203] Linking target tests/xnvme_tests_cli 00:01:46.577 [178/203] Linking target tests/xnvme_tests_ioworker 00:01:46.577 [179/203] Linking target tests/xnvme_tests_enum 00:01:46.577 [180/203] Linking target tests/xnvme_tests_lblk 00:01:46.577 [181/203] Linking target tests/xnvme_tests_xnvme_file 00:01:46.577 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:46.577 [183/203] Linking target tests/xnvme_tests_znd_state 00:01:46.577 [184/203] Linking target tests/xnvme_tests_znd_append 00:01:46.577 [185/203] Linking target tests/xnvme_tests_scc 00:01:46.577 [186/203] Linking target tools/xdd 00:01:46.577 [187/203] Linking target tools/lblk 00:01:46.577 [188/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:46.577 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:46.577 [190/203] Linking target tools/xnvme 00:01:46.577 [191/203] Linking target 
tests/xnvme_tests_map 00:01:46.577 [192/203] Linking target tools/zoned 00:01:46.577 [193/203] Linking target examples/xnvme_single_async 00:01:46.577 [194/203] Linking target tools/xnvme_file 00:01:46.577 [195/203] Linking target tools/kvs 00:01:46.577 [196/203] Linking target tests/xnvme_tests_kvs 00:01:46.577 [197/203] Linking target examples/xnvme_enum 00:01:46.577 [198/203] Linking target examples/xnvme_io_async 00:01:46.577 [199/203] Linking target examples/zoned_io_sync 00:01:46.577 [200/203] Linking target examples/xnvme_dev 00:01:46.577 [201/203] Linking target examples/xnvme_hello 00:01:46.577 [202/203] Linking target examples/zoned_io_async 00:01:46.577 [203/203] Linking target examples/xnvme_single_sync 00:01:46.577 INFO: autodetecting backend as ninja 00:01:46.577 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:46.577 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:01:50.767 The Meson build system 00:01:50.767 Version: 1.5.0 00:01:50.767 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:50.767 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:50.767 Build type: native build 00:01:50.767 Program cat found: YES (/usr/bin/cat) 00:01:50.767 Project name: DPDK 00:01:50.767 Project version: 23.11.0 00:01:50.767 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:50.767 C linker for the host machine: cc ld.bfd 2.40-14 00:01:50.767 Host machine cpu family: x86_64 00:01:50.767 Host machine cpu: x86_64 00:01:50.767 Message: ## Building in Developer Mode ## 00:01:50.767 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:50.767 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:50.767 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:50.767 Program python3 found: YES (/usr/bin/python3) 00:01:50.767 Program cat found: YES (/usr/bin/cat) 00:01:50.767 Compiler for C supports arguments -march=native: YES 00:01:50.767 Checking for size of "void *" : 8 00:01:50.767 Checking for size of "void *" : 8 (cached) 00:01:50.767 Library m found: YES 00:01:50.767 Library numa found: YES 00:01:50.767 Has header "numaif.h" : YES 00:01:50.767 Library fdt found: NO 00:01:50.767 Library execinfo found: NO 00:01:50.767 Has header "execinfo.h" : YES 00:01:50.767 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:50.767 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:50.767 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:50.767 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:50.767 Run-time dependency openssl found: YES 3.1.1 00:01:50.767 Run-time dependency libpcap found: YES 1.10.4 00:01:50.767 Has header "pcap.h" with dependency libpcap: YES 00:01:50.767 Compiler for C supports arguments -Wcast-qual: YES 00:01:50.767 Compiler for C supports arguments -Wdeprecated: YES 00:01:50.767 Compiler for C supports arguments -Wformat: YES 00:01:50.767 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:50.767 Compiler for C supports arguments -Wformat-security: NO 00:01:50.767 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:50.767 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:50.767 Compiler for C supports arguments -Wnested-externs: YES 00:01:50.767 Compiler for C supports arguments -Wold-style-definition: YES 00:01:50.767 Compiler for C supports arguments 
-Wpointer-arith: YES 00:01:50.767 Compiler for C supports arguments -Wsign-compare: YES 00:01:50.767 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:50.767 Compiler for C supports arguments -Wundef: YES 00:01:50.767 Compiler for C supports arguments -Wwrite-strings: YES 00:01:50.767 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:50.767 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:50.767 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:50.767 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:50.767 Program objdump found: YES (/usr/bin/objdump) 00:01:50.767 Compiler for C supports arguments -mavx512f: YES 00:01:50.767 Checking if "AVX512 checking" compiles: YES 00:01:50.767 Fetching value of define "__SSE4_2__" : 1 00:01:50.767 Fetching value of define "__AES__" : 1 00:01:50.767 Fetching value of define "__AVX__" : 1 00:01:50.767 Fetching value of define "__AVX2__" : 1 00:01:50.767 Fetching value of define "__AVX512BW__" : 1 00:01:50.767 Fetching value of define "__AVX512CD__" : 1 00:01:50.767 Fetching value of define "__AVX512DQ__" : 1 00:01:50.767 Fetching value of define "__AVX512F__" : 1 00:01:50.767 Fetching value of define "__AVX512VL__" : 1 00:01:50.767 Fetching value of define "__PCLMUL__" : 1 00:01:50.767 Fetching value of define "__RDRND__" : 1 00:01:50.767 Fetching value of define "__RDSEED__" : 1 00:01:50.767 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:50.767 Fetching value of define "__znver1__" : (undefined) 00:01:50.767 Fetching value of define "__znver2__" : (undefined) 00:01:50.767 Fetching value of define "__znver3__" : (undefined) 00:01:50.767 Fetching value of define "__znver4__" : (undefined) 00:01:50.767 Library asan found: YES 00:01:50.767 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:50.767 Message: lib/log: Defining dependency "log" 00:01:50.767 Message: lib/kvargs: Defining dependency "kvargs" 00:01:50.767 Message: lib/telemetry: Defining dependency "telemetry" 00:01:50.767 Library rt found: YES 00:01:50.767 Checking for function "getentropy" : NO 00:01:50.767 Message: lib/eal: Defining dependency "eal" 00:01:50.767 Message: lib/ring: Defining dependency "ring" 00:01:50.767 Message: lib/rcu: Defining dependency "rcu" 00:01:50.767 Message: lib/mempool: Defining dependency "mempool" 00:01:50.767 Message: lib/mbuf: Defining dependency "mbuf" 00:01:50.767 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:50.767 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:50.767 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:50.767 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:50.767 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:50.767 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:50.767 Compiler for C supports arguments -mpclmul: YES 00:01:50.767 Compiler for C supports arguments -maes: YES 00:01:50.767 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:50.767 Compiler for C supports arguments -mavx512bw: YES 00:01:50.768 Compiler for C supports arguments -mavx512dq: YES 00:01:50.768 Compiler for C supports arguments -mavx512vl: YES 00:01:50.768 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:50.768 Compiler for C supports arguments -mavx2: YES 00:01:50.768 Compiler for C supports arguments -mavx: YES 00:01:50.768 Message: lib/net: Defining dependency "net" 00:01:50.768 Message: lib/meter: Defining dependency "meter" 00:01:50.768 Message: 
lib/ethdev: Defining dependency "ethdev" 00:01:50.768 Message: lib/pci: Defining dependency "pci" 00:01:50.768 Message: lib/cmdline: Defining dependency "cmdline" 00:01:50.768 Message: lib/hash: Defining dependency "hash" 00:01:50.768 Message: lib/timer: Defining dependency "timer" 00:01:50.768 Message: lib/compressdev: Defining dependency "compressdev" 00:01:50.768 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:50.768 Message: lib/dmadev: Defining dependency "dmadev" 00:01:50.768 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:50.768 Message: lib/power: Defining dependency "power" 00:01:50.768 Message: lib/reorder: Defining dependency "reorder" 00:01:50.768 Message: lib/security: Defining dependency "security" 00:01:50.768 Has header "linux/userfaultfd.h" : YES 00:01:50.768 Has header "linux/vduse.h" : YES 00:01:50.768 Message: lib/vhost: Defining dependency "vhost" 00:01:50.768 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:50.768 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:50.768 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:50.768 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.768 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.768 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:50.768 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:50.768 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:50.768 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:50.768 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:50.768 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:50.768 Configuring doxy-api-html.conf using configuration 00:01:50.768 Configuring doxy-api-man.conf using configuration 00:01:50.768 Program mandb found: YES (/usr/bin/mandb) 00:01:50.768 Program sphinx-build found: NO 00:01:50.768 Configuring rte_build_config.h using configuration 00:01:50.768 Message: 00:01:50.768 ================= 00:01:50.768 Applications Enabled 00:01:50.768 ================= 00:01:50.768 00:01:50.768 apps: 00:01:50.768 00:01:50.768 00:01:50.768 Message: 00:01:50.768 ================= 00:01:50.768 Libraries Enabled 00:01:50.768 ================= 00:01:50.768 00:01:50.768 libs: 00:01:50.768 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:50.768 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:50.768 cryptodev, dmadev, power, reorder, security, vhost, 00:01:50.768 00:01:50.768 Message: 00:01:50.768 =============== 00:01:50.768 Drivers Enabled 00:01:50.768 =============== 00:01:50.768 00:01:50.768 common: 00:01:50.768 00:01:50.768 bus: 00:01:50.768 pci, vdev, 00:01:50.768 mempool: 00:01:50.768 ring, 00:01:50.768 dma: 00:01:50.768 00:01:50.768 net: 00:01:50.768 00:01:50.768 crypto: 00:01:50.768 00:01:50.768 compress: 00:01:50.768 00:01:50.768 vdpa: 00:01:50.768 00:01:50.768 00:01:50.768 Message: 00:01:50.768 ================= 00:01:50.768 Content Skipped 00:01:50.768 ================= 00:01:50.768 00:01:50.768 apps: 00:01:50.768 dumpcap: explicitly disabled via build config 00:01:50.768 graph: explicitly disabled via build config 00:01:50.768 pdump: explicitly disabled via build config 00:01:50.768 proc-info: explicitly disabled via build config 00:01:50.768 test-acl: explicitly disabled via build config 00:01:50.768 test-bbdev: explicitly disabled via build config 00:01:50.768 
test-cmdline: explicitly disabled via build config 00:01:50.768 test-compress-perf: explicitly disabled via build config 00:01:50.768 test-crypto-perf: explicitly disabled via build config 00:01:50.768 test-dma-perf: explicitly disabled via build config 00:01:50.768 test-eventdev: explicitly disabled via build config 00:01:50.768 test-fib: explicitly disabled via build config 00:01:50.768 test-flow-perf: explicitly disabled via build config 00:01:50.768 test-gpudev: explicitly disabled via build config 00:01:50.768 test-mldev: explicitly disabled via build config 00:01:50.768 test-pipeline: explicitly disabled via build config 00:01:50.768 test-pmd: explicitly disabled via build config 00:01:50.768 test-regex: explicitly disabled via build config 00:01:50.768 test-sad: explicitly disabled via build config 00:01:50.768 test-security-perf: explicitly disabled via build config 00:01:50.768 00:01:50.768 libs: 00:01:50.768 metrics: explicitly disabled via build config 00:01:50.768 acl: explicitly disabled via build config 00:01:50.768 bbdev: explicitly disabled via build config 00:01:50.768 bitratestats: explicitly disabled via build config 00:01:50.768 bpf: explicitly disabled via build config 00:01:50.768 cfgfile: explicitly disabled via build config 00:01:50.768 distributor: explicitly disabled via build config 00:01:50.768 efd: explicitly disabled via build config 00:01:50.768 eventdev: explicitly disabled via build config 00:01:50.768 dispatcher: explicitly disabled via build config 00:01:50.768 gpudev: explicitly disabled via build config 00:01:50.768 gro: explicitly disabled via build config 00:01:50.768 gso: explicitly disabled via build config 00:01:50.768 ip_frag: explicitly disabled via build config 00:01:50.768 jobstats: explicitly disabled via build config 00:01:50.768 latencystats: explicitly disabled via build config 00:01:50.768 lpm: explicitly disabled via build config 00:01:50.768 member: explicitly disabled via build config 00:01:50.768 pcapng: explicitly disabled via build config 00:01:50.768 rawdev: explicitly disabled via build config 00:01:50.768 regexdev: explicitly disabled via build config 00:01:50.768 mldev: explicitly disabled via build config 00:01:50.768 rib: explicitly disabled via build config 00:01:50.768 sched: explicitly disabled via build config 00:01:50.768 stack: explicitly disabled via build config 00:01:50.768 ipsec: explicitly disabled via build config 00:01:50.768 pdcp: explicitly disabled via build config 00:01:50.768 fib: explicitly disabled via build config 00:01:50.768 port: explicitly disabled via build config 00:01:50.768 pdump: explicitly disabled via build config 00:01:50.768 table: explicitly disabled via build config 00:01:50.768 pipeline: explicitly disabled via build config 00:01:50.768 graph: explicitly disabled via build config 00:01:50.768 node: explicitly disabled via build config 00:01:50.768 00:01:50.768 drivers: 00:01:50.768 common/cpt: not in enabled drivers build config 00:01:50.768 common/dpaax: not in enabled drivers build config 00:01:50.768 common/iavf: not in enabled drivers build config 00:01:50.768 common/idpf: not in enabled drivers build config 00:01:50.768 common/mvep: not in enabled drivers build config 00:01:50.768 common/octeontx: not in enabled drivers build config 00:01:50.768 bus/auxiliary: not in enabled drivers build config 00:01:50.768 bus/cdx: not in enabled drivers build config 00:01:50.768 bus/dpaa: not in enabled drivers build config 00:01:50.768 bus/fslmc: not in enabled drivers build config 00:01:50.769 
bus/ifpga: not in enabled drivers build config 00:01:50.769 bus/platform: not in enabled drivers build config 00:01:50.769 bus/vmbus: not in enabled drivers build config 00:01:50.769 common/cnxk: not in enabled drivers build config 00:01:50.769 common/mlx5: not in enabled drivers build config 00:01:50.769 common/nfp: not in enabled drivers build config 00:01:50.769 common/qat: not in enabled drivers build config 00:01:50.769 common/sfc_efx: not in enabled drivers build config 00:01:50.769 mempool/bucket: not in enabled drivers build config 00:01:50.769 mempool/cnxk: not in enabled drivers build config 00:01:50.769 mempool/dpaa: not in enabled drivers build config 00:01:50.769 mempool/dpaa2: not in enabled drivers build config 00:01:50.769 mempool/octeontx: not in enabled drivers build config 00:01:50.769 mempool/stack: not in enabled drivers build config 00:01:50.769 dma/cnxk: not in enabled drivers build config 00:01:50.769 dma/dpaa: not in enabled drivers build config 00:01:50.769 dma/dpaa2: not in enabled drivers build config 00:01:50.769 dma/hisilicon: not in enabled drivers build config 00:01:50.769 dma/idxd: not in enabled drivers build config 00:01:50.769 dma/ioat: not in enabled drivers build config 00:01:50.769 dma/skeleton: not in enabled drivers build config 00:01:50.769 net/af_packet: not in enabled drivers build config 00:01:50.769 net/af_xdp: not in enabled drivers build config 00:01:50.769 net/ark: not in enabled drivers build config 00:01:50.769 net/atlantic: not in enabled drivers build config 00:01:50.769 net/avp: not in enabled drivers build config 00:01:50.769 net/axgbe: not in enabled drivers build config 00:01:50.769 net/bnx2x: not in enabled drivers build config 00:01:50.769 net/bnxt: not in enabled drivers build config 00:01:50.769 net/bonding: not in enabled drivers build config 00:01:50.769 net/cnxk: not in enabled drivers build config 00:01:50.769 net/cpfl: not in enabled drivers build config 00:01:50.769 net/cxgbe: not in enabled drivers build config 00:01:50.769 net/dpaa: not in enabled drivers build config 00:01:50.769 net/dpaa2: not in enabled drivers build config 00:01:50.769 net/e1000: not in enabled drivers build config 00:01:50.769 net/ena: not in enabled drivers build config 00:01:50.769 net/enetc: not in enabled drivers build config 00:01:50.769 net/enetfec: not in enabled drivers build config 00:01:50.769 net/enic: not in enabled drivers build config 00:01:50.769 net/failsafe: not in enabled drivers build config 00:01:50.769 net/fm10k: not in enabled drivers build config 00:01:50.769 net/gve: not in enabled drivers build config 00:01:50.769 net/hinic: not in enabled drivers build config 00:01:50.769 net/hns3: not in enabled drivers build config 00:01:50.769 net/i40e: not in enabled drivers build config 00:01:50.769 net/iavf: not in enabled drivers build config 00:01:50.769 net/ice: not in enabled drivers build config 00:01:50.769 net/idpf: not in enabled drivers build config 00:01:50.769 net/igc: not in enabled drivers build config 00:01:50.769 net/ionic: not in enabled drivers build config 00:01:50.769 net/ipn3ke: not in enabled drivers build config 00:01:50.769 net/ixgbe: not in enabled drivers build config 00:01:50.769 net/mana: not in enabled drivers build config 00:01:50.769 net/memif: not in enabled drivers build config 00:01:50.769 net/mlx4: not in enabled drivers build config 00:01:50.769 net/mlx5: not in enabled drivers build config 00:01:50.769 net/mvneta: not in enabled drivers build config 00:01:50.769 net/mvpp2: not in enabled drivers 
build config 00:01:50.769 net/netvsc: not in enabled drivers build config 00:01:50.769 net/nfb: not in enabled drivers build config 00:01:50.769 net/nfp: not in enabled drivers build config 00:01:50.769 net/ngbe: not in enabled drivers build config 00:01:50.769 net/null: not in enabled drivers build config 00:01:50.769 net/octeontx: not in enabled drivers build config 00:01:50.769 net/octeon_ep: not in enabled drivers build config 00:01:50.769 net/pcap: not in enabled drivers build config 00:01:50.769 net/pfe: not in enabled drivers build config 00:01:50.769 net/qede: not in enabled drivers build config 00:01:50.769 net/ring: not in enabled drivers build config 00:01:50.769 net/sfc: not in enabled drivers build config 00:01:50.769 net/softnic: not in enabled drivers build config 00:01:50.769 net/tap: not in enabled drivers build config 00:01:50.769 net/thunderx: not in enabled drivers build config 00:01:50.769 net/txgbe: not in enabled drivers build config 00:01:50.769 net/vdev_netvsc: not in enabled drivers build config 00:01:50.769 net/vhost: not in enabled drivers build config 00:01:50.769 net/virtio: not in enabled drivers build config 00:01:50.769 net/vmxnet3: not in enabled drivers build config 00:01:50.769 raw/*: missing internal dependency, "rawdev" 00:01:50.769 crypto/armv8: not in enabled drivers build config 00:01:50.769 crypto/bcmfs: not in enabled drivers build config 00:01:50.769 crypto/caam_jr: not in enabled drivers build config 00:01:50.769 crypto/ccp: not in enabled drivers build config 00:01:50.769 crypto/cnxk: not in enabled drivers build config 00:01:50.769 crypto/dpaa_sec: not in enabled drivers build config 00:01:50.769 crypto/dpaa2_sec: not in enabled drivers build config 00:01:50.769 crypto/ipsec_mb: not in enabled drivers build config 00:01:50.769 crypto/mlx5: not in enabled drivers build config 00:01:50.769 crypto/mvsam: not in enabled drivers build config 00:01:50.769 crypto/nitrox: not in enabled drivers build config 00:01:50.769 crypto/null: not in enabled drivers build config 00:01:50.769 crypto/octeontx: not in enabled drivers build config 00:01:50.769 crypto/openssl: not in enabled drivers build config 00:01:50.769 crypto/scheduler: not in enabled drivers build config 00:01:50.769 crypto/uadk: not in enabled drivers build config 00:01:50.769 crypto/virtio: not in enabled drivers build config 00:01:50.769 compress/isal: not in enabled drivers build config 00:01:50.769 compress/mlx5: not in enabled drivers build config 00:01:50.769 compress/octeontx: not in enabled drivers build config 00:01:50.769 compress/zlib: not in enabled drivers build config 00:01:50.769 regex/*: missing internal dependency, "regexdev" 00:01:50.769 ml/*: missing internal dependency, "mldev" 00:01:50.769 vdpa/ifc: not in enabled drivers build config 00:01:50.769 vdpa/mlx5: not in enabled drivers build config 00:01:50.769 vdpa/nfp: not in enabled drivers build config 00:01:50.769 vdpa/sfc: not in enabled drivers build config 00:01:50.769 event/*: missing internal dependency, "eventdev" 00:01:50.769 baseband/*: missing internal dependency, "bbdev" 00:01:50.769 gpu/*: missing internal dependency, "gpudev" 00:01:50.769 00:01:50.769 00:01:51.028 Build targets in project: 84 00:01:51.028 00:01:51.028 DPDK 23.11.0 00:01:51.028 00:01:51.028 User defined options 00:01:51.028 buildtype : debug 00:01:51.028 default_library : shared 00:01:51.028 libdir : lib 00:01:51.028 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:51.028 b_sanitize : address 00:01:51.028 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:51.028 c_link_args : 00:01:51.028 cpu_instruction_set: native 00:01:51.028 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:51.028 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:51.028 enable_docs : false 00:01:51.028 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.028 enable_kmods : false 00:01:51.028 tests : false 00:01:51.028 00:01:51.028 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.286 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:51.286 [1/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.286 [2/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.286 [3/264] Linking static target lib/librte_kvargs.a 00:01:51.545 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.545 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.545 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.545 [7/264] Linking static target lib/librte_log.a 00:01:51.545 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.545 [9/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.545 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.803 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.803 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.803 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.803 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.803 [15/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.803 [16/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.803 [17/264] Linking static target lib/librte_telemetry.a 00:01:51.803 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.061 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.061 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.061 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.061 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.321 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.321 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.321 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.321 [26/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.321 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.321 [28/264] Linking target lib/librte_log.so.24.0 00:01:52.321 [29/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.321 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.579 [31/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:52.579 [32/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.579 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.579 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.579 [35/264] Linking target lib/librte_kvargs.so.24.0 00:01:52.579 [36/264] Linking target lib/librte_telemetry.so.24.0 00:01:52.579 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.579 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.579 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.579 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.579 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.579 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.579 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.838 [44/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:52.838 [45/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.838 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.838 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.838 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.838 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.838 [50/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.107 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.107 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.107 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.107 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.108 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.108 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.108 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.108 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.381 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.381 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.381 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.381 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.381 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.381 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.381 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.381 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.381 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.381 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.639 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.639 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.639 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.639 [72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.639 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.639 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.639 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.639 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.639 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.639 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.897 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.897 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.897 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.154 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.154 [83/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.154 [84/264] Linking static target lib/librte_ring.a 00:01:54.154 [85/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.154 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.154 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:54.154 [88/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.154 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.154 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.154 [91/264] Linking static target lib/librte_mempool.a 00:01:54.154 [92/264] Linking static target lib/librte_eal.a 00:01:54.411 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:54.411 [94/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.411 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:54.411 [96/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:54.411 [97/264] Linking static target lib/librte_rcu.a 00:01:54.669 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:54.669 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:54.669 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:54.669 [101/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:54.954 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:54.954 [103/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:54.954 [104/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:54.954 [105/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.954 [106/264] Linking static target lib/librte_meter.a 00:01:55.213 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.213 [108/264] Linking static target lib/librte_net.a 00:01:55.213 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.213 [110/264] Linking static target lib/librte_mbuf.a 00:01:55.213 [111/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.213 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.213 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.213 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.470 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.470 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.470 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.470 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.727 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.985 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.985 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.985 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.985 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.985 [124/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.985 [125/264] Linking static target lib/librte_pci.a 00:01:56.242 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.242 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.242 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.243 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.243 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.243 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.243 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.243 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.243 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.243 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.243 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.243 [137/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.500 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.500 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.500 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.500 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.500 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.500 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.500 [144/264] Linking static target lib/librte_cmdline.a 00:01:56.758 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.758 [146/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.758 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.758 [148/264] Linking static target lib/librte_timer.a 00:01:56.758 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.758 [150/264] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.015 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.015 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.015 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.015 [154/264] Linking static target lib/librte_compressdev.a 00:01:57.015 [155/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.272 [156/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.272 [157/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.272 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.272 [159/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.272 [160/264] Linking static target lib/librte_ethdev.a 00:01:57.272 [161/264] Linking static target lib/librte_hash.a 00:01:57.272 [162/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.272 [163/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.272 [164/264] Linking static target lib/librte_dmadev.a 00:01:57.272 [165/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.530 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.530 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.530 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.530 [169/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.788 [170/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.788 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.788 [172/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.788 [173/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.788 [174/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.788 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.045 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.045 [177/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.045 [178/264] Linking static target lib/librte_cryptodev.a 00:01:58.045 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:58.045 [180/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.045 [181/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.045 [182/264] Linking static target lib/librte_power.a 00:01:58.302 [183/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.302 [184/264] Linking static target lib/librte_reorder.a 00:01:58.302 [185/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.302 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.302 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.302 [188/264] Linking static target lib/librte_security.a 00:01:58.560 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.560 [190/264] Generating lib/reorder.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:58.560 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.819 [192/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.077 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.077 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.077 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.077 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.077 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.336 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.336 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.336 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.336 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.336 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.336 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.336 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.594 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:59.594 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.594 [207/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.594 [208/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.594 [209/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.594 [210/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.594 [211/264] Linking static target drivers/librte_bus_pci.a 00:01:59.594 [212/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.594 [213/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.594 [214/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.594 [215/264] Linking static target drivers/librte_bus_vdev.a 00:01:59.594 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.594 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.852 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.852 [219/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.852 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.852 [221/264] Linking static target drivers/librte_mempool_ring.a 00:01:59.852 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.852 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.783 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.713 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.713 [226/264] Linking target lib/librte_eal.so.24.0 00:02:01.713 [227/264] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:01.713 [228/264] Linking target lib/librte_timer.so.24.0 00:02:01.713 [229/264] Linking target lib/librte_meter.so.24.0 00:02:01.713 [230/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:01.714 [231/264] Linking target lib/librte_dmadev.so.24.0 00:02:01.714 [232/264] Linking target lib/librte_pci.so.24.0 00:02:01.714 [233/264] Linking target lib/librte_ring.so.24.0 00:02:01.971 [234/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:01.971 [235/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:01.971 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:01.971 [237/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:01.971 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:01.971 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:01.971 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:01.971 [241/264] Linking target lib/librte_rcu.so.24.0 00:02:01.971 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:01.971 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:01.971 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:01.971 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:02.230 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:02.230 [247/264] Linking target lib/librte_reorder.so.24.0 00:02:02.230 [248/264] Linking target lib/librte_net.so.24.0 00:02:02.230 [249/264] Linking target lib/librte_compressdev.so.24.0 00:02:02.230 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:02.489 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:02.489 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:02.489 [253/264] Linking target lib/librte_cmdline.so.24.0 00:02:02.489 [254/264] Linking target lib/librte_hash.so.24.0 00:02:02.489 [255/264] Linking target lib/librte_security.so.24.0 00:02:02.489 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:02.748 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.748 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:03.007 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.007 [260/264] Linking target lib/librte_power.so.24.0 00:02:03.007 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.007 [262/264] Linking static target lib/librte_vhost.a 00:02:04.389 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.389 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:04.389 INFO: autodetecting backend as ninja 00:02:04.389 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:05.326 CC lib/ut_mock/mock.o 00:02:05.326 CC lib/ut/ut.o 00:02:05.326 CC lib/log/log.o 00:02:05.326 CC lib/log/log_deprecated.o 00:02:05.326 CC lib/log/log_flags.o 00:02:05.326 LIB libspdk_ut_mock.a 00:02:05.326 SO libspdk_ut_mock.so.5.0 00:02:05.326 LIB libspdk_ut.a 00:02:05.326 LIB libspdk_log.a 00:02:05.326 SO libspdk_ut.so.1.0 00:02:05.326 SO 
libspdk_log.so.6.1 00:02:05.326 SYMLINK libspdk_ut_mock.so 00:02:05.584 SYMLINK libspdk_ut.so 00:02:05.584 SYMLINK libspdk_log.so 00:02:05.584 CC lib/util/base64.o 00:02:05.584 CC lib/util/cpuset.o 00:02:05.584 CC lib/util/crc16.o 00:02:05.584 CC lib/util/bit_array.o 00:02:05.584 CC lib/util/crc32c.o 00:02:05.584 CC lib/util/crc32.o 00:02:05.584 CC lib/dma/dma.o 00:02:05.584 CC lib/ioat/ioat.o 00:02:05.584 CXX lib/trace_parser/trace.o 00:02:05.584 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.584 CC lib/util/crc32_ieee.o 00:02:05.584 CC lib/vfio_user/host/vfio_user.o 00:02:05.584 CC lib/util/crc64.o 00:02:05.584 CC lib/util/dif.o 00:02:05.843 LIB libspdk_dma.a 00:02:05.843 CC lib/util/fd.o 00:02:05.843 CC lib/util/file.o 00:02:05.843 SO libspdk_dma.so.3.0 00:02:05.843 CC lib/util/hexlify.o 00:02:05.843 CC lib/util/iov.o 00:02:05.843 SYMLINK libspdk_dma.so 00:02:05.843 CC lib/util/math.o 00:02:05.843 CC lib/util/pipe.o 00:02:05.843 CC lib/util/strerror_tls.o 00:02:05.843 LIB libspdk_ioat.a 00:02:05.843 CC lib/util/string.o 00:02:05.843 SO libspdk_ioat.so.6.0 00:02:05.843 LIB libspdk_vfio_user.a 00:02:05.843 CC lib/util/uuid.o 00:02:05.843 SO libspdk_vfio_user.so.4.0 00:02:05.843 CC lib/util/fd_group.o 00:02:05.843 SYMLINK libspdk_ioat.so 00:02:05.843 CC lib/util/xor.o 00:02:05.843 SYMLINK libspdk_vfio_user.so 00:02:05.843 CC lib/util/zipf.o 00:02:06.413 LIB libspdk_util.a 00:02:06.413 SO libspdk_util.so.8.0 00:02:06.413 LIB libspdk_trace_parser.a 00:02:06.413 SO libspdk_trace_parser.so.4.0 00:02:06.413 SYMLINK libspdk_util.so 00:02:06.413 SYMLINK libspdk_trace_parser.so 00:02:06.672 CC lib/json/json_parse.o 00:02:06.672 CC lib/env_dpdk/env.o 00:02:06.672 CC lib/env_dpdk/memory.o 00:02:06.672 CC lib/json/json_write.o 00:02:06.672 CC lib/json/json_util.o 00:02:06.672 CC lib/env_dpdk/pci.o 00:02:06.672 CC lib/idxd/idxd.o 00:02:06.672 CC lib/conf/conf.o 00:02:06.672 CC lib/rdma/common.o 00:02:06.672 CC lib/vmd/vmd.o 00:02:06.672 CC lib/vmd/led.o 00:02:06.672 LIB libspdk_conf.a 00:02:06.672 CC lib/env_dpdk/init.o 00:02:06.672 SO libspdk_conf.so.5.0 00:02:06.672 CC lib/rdma/rdma_verbs.o 00:02:06.931 CC lib/env_dpdk/threads.o 00:02:06.931 LIB libspdk_json.a 00:02:06.931 SYMLINK libspdk_conf.so 00:02:06.931 CC lib/env_dpdk/pci_ioat.o 00:02:06.931 SO libspdk_json.so.5.1 00:02:06.931 SYMLINK libspdk_json.so 00:02:06.931 CC lib/idxd/idxd_user.o 00:02:06.931 CC lib/idxd/idxd_kernel.o 00:02:06.931 CC lib/env_dpdk/pci_virtio.o 00:02:06.931 CC lib/env_dpdk/pci_vmd.o 00:02:06.931 LIB libspdk_rdma.a 00:02:06.931 SO libspdk_rdma.so.5.0 00:02:06.931 CC lib/env_dpdk/pci_idxd.o 00:02:06.931 CC lib/env_dpdk/pci_event.o 00:02:06.931 SYMLINK libspdk_rdma.so 00:02:06.931 CC lib/env_dpdk/sigbus_handler.o 00:02:07.191 CC lib/env_dpdk/pci_dpdk.o 00:02:07.191 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.191 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.191 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.191 LIB libspdk_idxd.a 00:02:07.191 SO libspdk_idxd.so.11.0 00:02:07.191 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.191 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:07.191 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.191 SYMLINK libspdk_idxd.so 00:02:07.191 LIB libspdk_vmd.a 00:02:07.191 SO libspdk_vmd.so.5.0 00:02:07.191 SYMLINK libspdk_vmd.so 00:02:07.451 LIB libspdk_jsonrpc.a 00:02:07.451 SO libspdk_jsonrpc.so.5.1 00:02:07.451 SYMLINK libspdk_jsonrpc.so 00:02:07.710 CC lib/rpc/rpc.o 00:02:07.710 LIB libspdk_env_dpdk.a 00:02:07.710 LIB libspdk_rpc.a 00:02:07.710 SO libspdk_rpc.so.5.0 00:02:07.710 SO libspdk_env_dpdk.so.13.0 00:02:07.710 
SYMLINK libspdk_rpc.so 00:02:07.971 SYMLINK libspdk_env_dpdk.so 00:02:07.971 CC lib/notify/notify.o 00:02:07.971 CC lib/notify/notify_rpc.o 00:02:07.971 CC lib/sock/sock.o 00:02:07.971 CC lib/trace/trace.o 00:02:07.971 CC lib/sock/sock_rpc.o 00:02:07.971 CC lib/trace/trace_flags.o 00:02:07.971 CC lib/trace/trace_rpc.o 00:02:07.971 LIB libspdk_notify.a 00:02:07.971 SO libspdk_notify.so.5.0 00:02:08.231 SYMLINK libspdk_notify.so 00:02:08.231 LIB libspdk_trace.a 00:02:08.231 SO libspdk_trace.so.9.0 00:02:08.231 SYMLINK libspdk_trace.so 00:02:08.231 LIB libspdk_sock.a 00:02:08.231 SO libspdk_sock.so.8.0 00:02:08.490 CC lib/thread/iobuf.o 00:02:08.490 CC lib/thread/thread.o 00:02:08.490 SYMLINK libspdk_sock.so 00:02:08.490 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.490 CC lib/nvme/nvme_ns_cmd.o 00:02:08.490 CC lib/nvme/nvme_ctrlr.o 00:02:08.490 CC lib/nvme/nvme_fabric.o 00:02:08.490 CC lib/nvme/nvme_ns.o 00:02:08.490 CC lib/nvme/nvme_pcie_common.o 00:02:08.490 CC lib/nvme/nvme_qpair.o 00:02:08.490 CC lib/nvme/nvme_pcie.o 00:02:08.748 CC lib/nvme/nvme.o 00:02:09.007 CC lib/nvme/nvme_quirks.o 00:02:09.007 CC lib/nvme/nvme_transport.o 00:02:09.007 CC lib/nvme/nvme_discovery.o 00:02:09.265 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.265 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.265 CC lib/nvme/nvme_tcp.o 00:02:09.524 CC lib/nvme/nvme_opal.o 00:02:09.524 CC lib/nvme/nvme_io_msg.o 00:02:09.524 CC lib/nvme/nvme_poll_group.o 00:02:09.524 CC lib/nvme/nvme_zns.o 00:02:09.524 CC lib/nvme/nvme_cuse.o 00:02:09.783 CC lib/nvme/nvme_vfio_user.o 00:02:09.783 CC lib/nvme/nvme_rdma.o 00:02:09.783 LIB libspdk_thread.a 00:02:09.783 SO libspdk_thread.so.9.0 00:02:10.041 SYMLINK libspdk_thread.so 00:02:10.041 CC lib/accel/accel.o 00:02:10.041 CC lib/blob/blobstore.o 00:02:10.041 CC lib/init/json_config.o 00:02:10.041 CC lib/virtio/virtio.o 00:02:10.041 CC lib/virtio/virtio_vhost_user.o 00:02:10.041 CC lib/blob/request.o 00:02:10.041 CC lib/virtio/virtio_vfio_user.o 00:02:10.041 CC lib/init/subsystem.o 00:02:10.300 CC lib/init/subsystem_rpc.o 00:02:10.300 CC lib/init/rpc.o 00:02:10.300 CC lib/accel/accel_rpc.o 00:02:10.300 CC lib/virtio/virtio_pci.o 00:02:10.300 CC lib/blob/zeroes.o 00:02:10.300 LIB libspdk_init.a 00:02:10.300 CC lib/blob/blob_bs_dev.o 00:02:10.558 CC lib/accel/accel_sw.o 00:02:10.558 SO libspdk_init.so.4.0 00:02:10.558 SYMLINK libspdk_init.so 00:02:10.558 LIB libspdk_virtio.a 00:02:10.558 CC lib/event/app_rpc.o 00:02:10.558 CC lib/event/log_rpc.o 00:02:10.558 CC lib/event/app.o 00:02:10.558 CC lib/event/reactor.o 00:02:10.558 SO libspdk_virtio.so.6.0 00:02:10.816 CC lib/event/scheduler_static.o 00:02:10.816 SYMLINK libspdk_virtio.so 00:02:10.816 LIB libspdk_accel.a 00:02:10.816 SO libspdk_accel.so.14.0 00:02:10.816 SYMLINK libspdk_accel.so 00:02:11.075 LIB libspdk_nvme.a 00:02:11.075 CC lib/bdev/bdev_rpc.o 00:02:11.075 CC lib/bdev/bdev.o 00:02:11.075 CC lib/bdev/part.o 00:02:11.075 CC lib/bdev/scsi_nvme.o 00:02:11.075 CC lib/bdev/bdev_zone.o 00:02:11.075 LIB libspdk_event.a 00:02:11.075 SO libspdk_event.so.12.0 00:02:11.075 SYMLINK libspdk_event.so 00:02:11.075 SO libspdk_nvme.so.12.0 00:02:11.336 SYMLINK libspdk_nvme.so 00:02:13.252 LIB libspdk_blob.a 00:02:13.252 SO libspdk_blob.so.10.1 00:02:13.252 SYMLINK libspdk_blob.so 00:02:13.252 CC lib/blobfs/blobfs.o 00:02:13.252 CC lib/lvol/lvol.o 00:02:13.252 CC lib/blobfs/tree.o 00:02:13.827 LIB libspdk_bdev.a 00:02:13.827 SO libspdk_bdev.so.14.0 00:02:13.827 SYMLINK libspdk_bdev.so 00:02:13.827 CC lib/scsi/lun.o 00:02:13.827 CC lib/scsi/dev.o 00:02:13.827 CC 
lib/scsi/port.o 00:02:13.827 CC lib/scsi/scsi.o 00:02:14.105 CC lib/ublk/ublk.o 00:02:14.105 CC lib/nvmf/ctrlr.o 00:02:14.105 CC lib/ftl/ftl_core.o 00:02:14.105 CC lib/nbd/nbd.o 00:02:14.106 LIB libspdk_blobfs.a 00:02:14.106 CC lib/nbd/nbd_rpc.o 00:02:14.106 SO libspdk_blobfs.so.9.0 00:02:14.106 CC lib/scsi/scsi_bdev.o 00:02:14.106 LIB libspdk_lvol.a 00:02:14.106 SO libspdk_lvol.so.9.1 00:02:14.106 SYMLINK libspdk_blobfs.so 00:02:14.106 CC lib/nvmf/ctrlr_discovery.o 00:02:14.106 CC lib/nvmf/ctrlr_bdev.o 00:02:14.106 SYMLINK libspdk_lvol.so 00:02:14.106 CC lib/nvmf/subsystem.o 00:02:14.106 CC lib/nvmf/nvmf.o 00:02:14.391 CC lib/nvmf/nvmf_rpc.o 00:02:14.391 CC lib/ftl/ftl_init.o 00:02:14.391 LIB libspdk_nbd.a 00:02:14.391 SO libspdk_nbd.so.6.0 00:02:14.391 SYMLINK libspdk_nbd.so 00:02:14.391 CC lib/ftl/ftl_layout.o 00:02:14.391 CC lib/ublk/ublk_rpc.o 00:02:14.648 CC lib/scsi/scsi_pr.o 00:02:14.649 CC lib/scsi/scsi_rpc.o 00:02:14.649 CC lib/scsi/task.o 00:02:14.649 LIB libspdk_ublk.a 00:02:14.649 SO libspdk_ublk.so.2.0 00:02:14.649 CC lib/nvmf/transport.o 00:02:14.649 SYMLINK libspdk_ublk.so 00:02:14.649 CC lib/nvmf/tcp.o 00:02:14.649 CC lib/ftl/ftl_debug.o 00:02:14.649 CC lib/ftl/ftl_io.o 00:02:14.906 CC lib/nvmf/rdma.o 00:02:14.906 LIB libspdk_scsi.a 00:02:14.906 SO libspdk_scsi.so.8.0 00:02:14.906 CC lib/ftl/ftl_sb.o 00:02:14.906 SYMLINK libspdk_scsi.so 00:02:14.906 CC lib/ftl/ftl_l2p.o 00:02:14.906 CC lib/ftl/ftl_l2p_flat.o 00:02:14.906 CC lib/ftl/ftl_nv_cache.o 00:02:14.906 CC lib/ftl/ftl_band.o 00:02:15.164 CC lib/ftl/ftl_band_ops.o 00:02:15.164 CC lib/ftl/ftl_writer.o 00:02:15.164 CC lib/iscsi/conn.o 00:02:15.164 CC lib/iscsi/init_grp.o 00:02:15.164 CC lib/iscsi/iscsi.o 00:02:15.164 CC lib/ftl/ftl_rq.o 00:02:15.164 CC lib/ftl/ftl_reloc.o 00:02:15.423 CC lib/ftl/ftl_l2p_cache.o 00:02:15.423 CC lib/ftl/ftl_p2l.o 00:02:15.423 CC lib/iscsi/md5.o 00:02:15.423 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.423 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.423 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.681 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.681 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.681 CC lib/iscsi/param.o 00:02:15.681 CC lib/iscsi/portal_grp.o 00:02:15.681 CC lib/iscsi/tgt_node.o 00:02:15.681 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.938 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.938 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.938 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.938 CC lib/iscsi/iscsi_subsystem.o 00:02:15.938 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.938 CC lib/vhost/vhost.o 00:02:15.938 CC lib/iscsi/iscsi_rpc.o 00:02:15.938 CC lib/iscsi/task.o 00:02:15.938 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.196 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.196 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.196 CC lib/ftl/utils/ftl_conf.o 00:02:16.196 CC lib/ftl/utils/ftl_md.o 00:02:16.196 CC lib/ftl/utils/ftl_mempool.o 00:02:16.196 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.196 CC lib/vhost/vhost_rpc.o 00:02:16.196 CC lib/ftl/utils/ftl_property.o 00:02:16.196 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.454 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.454 CC lib/vhost/vhost_scsi.o 00:02:16.454 CC lib/vhost/vhost_blk.o 00:02:16.454 LIB libspdk_iscsi.a 00:02:16.454 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.454 SO libspdk_iscsi.so.7.0 00:02:16.454 CC lib/vhost/rte_vhost_user.o 00:02:16.454 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.454 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.712 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.712 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.712 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.712 SYMLINK libspdk_iscsi.so 00:02:16.712 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.712 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.712 CC lib/ftl/base/ftl_base_dev.o 00:02:16.712 LIB libspdk_nvmf.a 00:02:16.712 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.712 CC lib/ftl/ftl_trace.o 00:02:16.712 SO libspdk_nvmf.so.17.0 00:02:16.971 SYMLINK libspdk_nvmf.so 00:02:16.971 LIB libspdk_ftl.a 00:02:17.229 SO libspdk_ftl.so.8.0 00:02:17.229 LIB libspdk_vhost.a 00:02:17.229 SO libspdk_vhost.so.7.1 00:02:17.229 SYMLINK libspdk_ftl.so 00:02:17.488 SYMLINK libspdk_vhost.so 00:02:17.488 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.488 CC module/blob/bdev/blob_bdev.o 00:02:17.488 CC module/sock/posix/posix.o 00:02:17.488 CC module/accel/error/accel_error.o 00:02:17.488 CC module/accel/iaa/accel_iaa.o 00:02:17.488 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.488 CC module/accel/ioat/accel_ioat.o 00:02:17.488 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.488 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.488 CC module/accel/dsa/accel_dsa.o 00:02:17.747 LIB libspdk_env_dpdk_rpc.a 00:02:17.747 SO libspdk_env_dpdk_rpc.so.5.0 00:02:17.747 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.747 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.747 CC module/accel/error/accel_error_rpc.o 00:02:17.747 LIB libspdk_scheduler_gscheduler.a 00:02:17.747 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.747 LIB libspdk_scheduler_dynamic.a 00:02:17.747 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.747 SO libspdk_scheduler_gscheduler.so.3.0 00:02:17.747 SO libspdk_scheduler_dynamic.so.3.0 00:02:17.747 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:17.747 LIB libspdk_blob_bdev.a 00:02:17.747 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.747 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.747 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.747 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.747 SO libspdk_blob_bdev.so.10.1 00:02:17.747 LIB libspdk_accel_ioat.a 00:02:17.747 LIB libspdk_accel_error.a 00:02:17.747 LIB libspdk_accel_iaa.a 00:02:17.747 SO libspdk_accel_error.so.1.0 00:02:17.747 SO libspdk_accel_ioat.so.5.0 00:02:17.747 SYMLINK libspdk_blob_bdev.so 00:02:17.747 SO libspdk_accel_iaa.so.2.0 00:02:17.747 SYMLINK libspdk_accel_error.so 00:02:17.747 SYMLINK libspdk_accel_ioat.so 00:02:17.747 SYMLINK libspdk_accel_iaa.so 00:02:18.005 LIB libspdk_accel_dsa.a 00:02:18.005 SO libspdk_accel_dsa.so.4.0 00:02:18.005 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.005 CC module/bdev/error/vbdev_error.o 00:02:18.005 CC module/bdev/gpt/gpt.o 00:02:18.005 CC module/bdev/delay/vbdev_delay.o 00:02:18.005 CC module/bdev/null/bdev_null.o 00:02:18.005 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.005 CC module/bdev/nvme/bdev_nvme.o 00:02:18.005 CC module/bdev/malloc/bdev_malloc.o 00:02:18.005 SYMLINK libspdk_accel_dsa.so 00:02:18.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.005 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.005 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.005 LIB libspdk_sock_posix.a 00:02:18.005 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.264 SO libspdk_sock_posix.so.5.0 00:02:18.264 CC module/bdev/null/bdev_null_rpc.o 00:02:18.264 SYMLINK libspdk_sock_posix.so 00:02:18.264 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.264 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.264 LIB libspdk_bdev_error.a 00:02:18.264 LIB libspdk_blobfs_bdev.a 00:02:18.264 SO libspdk_bdev_error.so.5.0 00:02:18.264 SO libspdk_blobfs_bdev.so.5.0 
00:02:18.264 SYMLINK libspdk_bdev_error.so 00:02:18.264 LIB libspdk_bdev_delay.a 00:02:18.264 LIB libspdk_bdev_malloc.a 00:02:18.264 SO libspdk_bdev_malloc.so.5.0 00:02:18.264 SYMLINK libspdk_blobfs_bdev.so 00:02:18.264 SO libspdk_bdev_delay.so.5.0 00:02:18.264 LIB libspdk_bdev_null.a 00:02:18.264 CC module/bdev/nvme/nvme_rpc.o 00:02:18.264 LIB libspdk_bdev_gpt.a 00:02:18.264 SO libspdk_bdev_null.so.5.0 00:02:18.264 SYMLINK libspdk_bdev_delay.so 00:02:18.264 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.264 SYMLINK libspdk_bdev_malloc.so 00:02:18.264 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.264 SO libspdk_bdev_gpt.so.5.0 00:02:18.264 CC module/bdev/raid/bdev_raid.o 00:02:18.522 LIB libspdk_bdev_lvol.a 00:02:18.522 SO libspdk_bdev_lvol.so.5.0 00:02:18.522 SYMLINK libspdk_bdev_null.so 00:02:18.522 SYMLINK libspdk_bdev_gpt.so 00:02:18.522 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.522 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.522 CC module/bdev/split/vbdev_split.o 00:02:18.522 SYMLINK libspdk_bdev_lvol.so 00:02:18.522 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.522 CC module/bdev/raid/raid0.o 00:02:18.522 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.522 CC module/bdev/raid/raid1.o 00:02:18.522 CC module/bdev/raid/concat.o 00:02:18.522 LIB libspdk_bdev_split.a 00:02:18.522 CC module/bdev/nvme/vbdev_opal.o 00:02:18.780 SO libspdk_bdev_split.so.5.0 00:02:18.780 LIB libspdk_bdev_passthru.a 00:02:18.780 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.780 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.780 SYMLINK libspdk_bdev_split.so 00:02:18.780 SO libspdk_bdev_passthru.so.5.0 00:02:18.780 SYMLINK libspdk_bdev_passthru.so 00:02:18.780 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.780 CC module/bdev/xnvme/bdev_xnvme.o 00:02:18.780 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:18.780 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.780 CC module/bdev/ftl/bdev_ftl.o 00:02:18.780 CC module/bdev/aio/bdev_aio.o 00:02:18.780 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:19.038 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.038 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.038 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.038 LIB libspdk_bdev_xnvme.a 00:02:19.038 LIB libspdk_bdev_ftl.a 00:02:19.038 SO libspdk_bdev_xnvme.so.2.0 00:02:19.038 LIB libspdk_bdev_zone_block.a 00:02:19.038 SO libspdk_bdev_ftl.so.5.0 00:02:19.038 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:19.039 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.039 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.039 SO libspdk_bdev_zone_block.so.5.0 00:02:19.039 LIB libspdk_bdev_aio.a 00:02:19.039 SYMLINK libspdk_bdev_xnvme.so 00:02:19.039 SYMLINK libspdk_bdev_ftl.so 00:02:19.039 SO libspdk_bdev_aio.so.5.0 00:02:19.039 SYMLINK libspdk_bdev_zone_block.so 00:02:19.297 SYMLINK libspdk_bdev_aio.so 00:02:19.297 LIB libspdk_bdev_raid.a 00:02:19.297 LIB libspdk_bdev_iscsi.a 00:02:19.297 SO libspdk_bdev_iscsi.so.5.0 00:02:19.297 SO libspdk_bdev_raid.so.5.0 00:02:19.297 SYMLINK libspdk_bdev_iscsi.so 00:02:19.297 SYMLINK libspdk_bdev_raid.so 00:02:19.557 LIB libspdk_bdev_virtio.a 00:02:19.557 SO libspdk_bdev_virtio.so.5.0 00:02:19.557 SYMLINK libspdk_bdev_virtio.so 00:02:20.125 LIB libspdk_bdev_nvme.a 00:02:20.125 SO libspdk_bdev_nvme.so.6.0 00:02:20.125 SYMLINK libspdk_bdev_nvme.so 00:02:20.383 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.383 CC module/event/subsystems/vmd/vmd.o 00:02:20.383 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.383 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.383 CC 
module/event/subsystems/sock/sock.o 00:02:20.383 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.383 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.642 LIB libspdk_event_sock.a 00:02:20.642 LIB libspdk_event_vhost_blk.a 00:02:20.642 SO libspdk_event_sock.so.4.0 00:02:20.642 LIB libspdk_event_vmd.a 00:02:20.642 SO libspdk_event_vhost_blk.so.2.0 00:02:20.642 LIB libspdk_event_scheduler.a 00:02:20.642 LIB libspdk_event_iobuf.a 00:02:20.642 SO libspdk_event_scheduler.so.3.0 00:02:20.642 SO libspdk_event_iobuf.so.2.0 00:02:20.642 SO libspdk_event_vmd.so.5.0 00:02:20.642 SYMLINK libspdk_event_sock.so 00:02:20.642 SYMLINK libspdk_event_vhost_blk.so 00:02:20.642 SYMLINK libspdk_event_scheduler.so 00:02:20.642 SYMLINK libspdk_event_vmd.so 00:02:20.642 SYMLINK libspdk_event_iobuf.so 00:02:20.903 CC module/event/subsystems/accel/accel.o 00:02:20.903 LIB libspdk_event_accel.a 00:02:20.903 SO libspdk_event_accel.so.5.0 00:02:20.903 SYMLINK libspdk_event_accel.so 00:02:21.161 CC module/event/subsystems/bdev/bdev.o 00:02:21.161 LIB libspdk_event_bdev.a 00:02:21.161 SO libspdk_event_bdev.so.5.0 00:02:21.420 SYMLINK libspdk_event_bdev.so 00:02:21.420 CC module/event/subsystems/scsi/scsi.o 00:02:21.420 CC module/event/subsystems/nbd/nbd.o 00:02:21.420 CC module/event/subsystems/ublk/ublk.o 00:02:21.420 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.420 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.679 LIB libspdk_event_ublk.a 00:02:21.679 LIB libspdk_event_nbd.a 00:02:21.679 LIB libspdk_event_scsi.a 00:02:21.679 SO libspdk_event_ublk.so.2.0 00:02:21.679 SO libspdk_event_nbd.so.5.0 00:02:21.679 SO libspdk_event_scsi.so.5.0 00:02:21.679 SYMLINK libspdk_event_scsi.so 00:02:21.679 SYMLINK libspdk_event_ublk.so 00:02:21.679 SYMLINK libspdk_event_nbd.so 00:02:21.679 LIB libspdk_event_nvmf.a 00:02:21.679 SO libspdk_event_nvmf.so.5.0 00:02:21.679 SYMLINK libspdk_event_nvmf.so 00:02:21.679 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.679 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.938 LIB libspdk_event_vhost_scsi.a 00:02:21.938 LIB libspdk_event_iscsi.a 00:02:21.938 SO libspdk_event_vhost_scsi.so.2.0 00:02:21.938 SO libspdk_event_iscsi.so.5.0 00:02:21.938 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.938 SYMLINK libspdk_event_iscsi.so 00:02:21.938 SO libspdk.so.5.0 00:02:21.938 SYMLINK libspdk.so 00:02:22.197 CXX app/trace/trace.o 00:02:22.197 CC examples/nvme/hello_world/hello_world.o 00:02:22.197 CC examples/accel/perf/accel_perf.o 00:02:22.197 CC examples/sock/hello_world/hello_sock.o 00:02:22.197 CC examples/ioat/perf/perf.o 00:02:22.198 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.198 CC examples/blob/hello_world/hello_blob.o 00:02:22.198 CC test/accel/dif/dif.o 00:02:22.198 CC test/app/bdev_svc/bdev_svc.o 00:02:22.198 CC test/bdev/bdevio/bdevio.o 00:02:22.457 LINK bdev_svc 00:02:22.457 LINK ioat_perf 00:02:22.457 LINK hello_blob 00:02:22.457 LINK hello_world 00:02:22.457 LINK hello_bdev 00:02:22.457 LINK hello_sock 00:02:22.457 LINK spdk_trace 00:02:22.717 CC examples/ioat/verify/verify.o 00:02:22.717 LINK dif 00:02:22.717 CC examples/nvme/reconnect/reconnect.o 00:02:22.717 CC test/app/histogram_perf/histogram_perf.o 00:02:22.717 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:22.717 CC examples/blob/cli/blobcli.o 00:02:22.717 LINK accel_perf 00:02:22.717 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.717 LINK bdevio 00:02:22.717 CC app/trace_record/trace_record.o 00:02:22.717 LINK histogram_perf 00:02:22.717 LINK verify 00:02:22.976 CC app/nvmf_tgt/nvmf_main.o 
00:02:22.976 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.976 CC examples/nvme/arbitration/arbitration.o 00:02:22.976 TEST_HEADER include/spdk/accel.h 00:02:22.976 TEST_HEADER include/spdk/accel_module.h 00:02:22.976 TEST_HEADER include/spdk/assert.h 00:02:22.976 TEST_HEADER include/spdk/barrier.h 00:02:22.976 TEST_HEADER include/spdk/base64.h 00:02:22.976 TEST_HEADER include/spdk/bdev.h 00:02:22.976 LINK reconnect 00:02:22.976 TEST_HEADER include/spdk/bdev_module.h 00:02:22.976 TEST_HEADER include/spdk/bdev_zone.h 00:02:22.976 TEST_HEADER include/spdk/bit_array.h 00:02:22.976 TEST_HEADER include/spdk/bit_pool.h 00:02:22.976 TEST_HEADER include/spdk/blob_bdev.h 00:02:22.976 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:22.976 TEST_HEADER include/spdk/blobfs.h 00:02:22.976 LINK spdk_trace_record 00:02:22.976 TEST_HEADER include/spdk/blob.h 00:02:22.976 TEST_HEADER include/spdk/conf.h 00:02:22.976 TEST_HEADER include/spdk/config.h 00:02:22.976 TEST_HEADER include/spdk/cpuset.h 00:02:22.976 TEST_HEADER include/spdk/crc16.h 00:02:22.976 TEST_HEADER include/spdk/crc32.h 00:02:22.976 TEST_HEADER include/spdk/crc64.h 00:02:22.976 TEST_HEADER include/spdk/dif.h 00:02:22.976 TEST_HEADER include/spdk/dma.h 00:02:22.976 TEST_HEADER include/spdk/endian.h 00:02:22.976 LINK nvme_fuzz 00:02:22.976 TEST_HEADER include/spdk/env_dpdk.h 00:02:22.976 TEST_HEADER include/spdk/env.h 00:02:22.976 TEST_HEADER include/spdk/event.h 00:02:22.976 TEST_HEADER include/spdk/fd_group.h 00:02:22.976 TEST_HEADER include/spdk/fd.h 00:02:22.976 TEST_HEADER include/spdk/file.h 00:02:22.976 TEST_HEADER include/spdk/ftl.h 00:02:22.976 TEST_HEADER include/spdk/gpt_spec.h 00:02:22.976 TEST_HEADER include/spdk/hexlify.h 00:02:22.976 TEST_HEADER include/spdk/histogram_data.h 00:02:22.976 TEST_HEADER include/spdk/idxd.h 00:02:22.976 TEST_HEADER include/spdk/idxd_spec.h 00:02:22.976 TEST_HEADER include/spdk/init.h 00:02:22.976 TEST_HEADER include/spdk/ioat.h 00:02:22.976 TEST_HEADER include/spdk/ioat_spec.h 00:02:22.976 TEST_HEADER include/spdk/iscsi_spec.h 00:02:22.976 TEST_HEADER include/spdk/json.h 00:02:22.976 TEST_HEADER include/spdk/jsonrpc.h 00:02:22.976 TEST_HEADER include/spdk/likely.h 00:02:22.976 CC test/blobfs/mkfs/mkfs.o 00:02:22.976 TEST_HEADER include/spdk/log.h 00:02:22.976 TEST_HEADER include/spdk/lvol.h 00:02:22.976 TEST_HEADER include/spdk/memory.h 00:02:22.976 TEST_HEADER include/spdk/mmio.h 00:02:22.976 TEST_HEADER include/spdk/nbd.h 00:02:22.976 TEST_HEADER include/spdk/notify.h 00:02:22.976 TEST_HEADER include/spdk/nvme.h 00:02:22.976 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.976 LINK nvmf_tgt 00:02:22.976 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.976 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.976 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.976 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.976 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.976 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.976 TEST_HEADER include/spdk/nvmf.h 00:02:22.976 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.976 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.976 TEST_HEADER include/spdk/opal.h 00:02:22.976 TEST_HEADER include/spdk/opal_spec.h 00:02:22.976 TEST_HEADER include/spdk/pci_ids.h 00:02:22.976 TEST_HEADER include/spdk/pipe.h 00:02:22.976 TEST_HEADER include/spdk/queue.h 00:02:22.976 TEST_HEADER include/spdk/reduce.h 00:02:22.976 TEST_HEADER include/spdk/rpc.h 00:02:22.976 TEST_HEADER include/spdk/scheduler.h 00:02:22.976 TEST_HEADER include/spdk/scsi.h 00:02:23.234 TEST_HEADER 
include/spdk/scsi_spec.h 00:02:23.234 TEST_HEADER include/spdk/sock.h 00:02:23.234 TEST_HEADER include/spdk/stdinc.h 00:02:23.234 TEST_HEADER include/spdk/string.h 00:02:23.234 TEST_HEADER include/spdk/thread.h 00:02:23.234 TEST_HEADER include/spdk/trace.h 00:02:23.234 TEST_HEADER include/spdk/trace_parser.h 00:02:23.234 TEST_HEADER include/spdk/tree.h 00:02:23.234 TEST_HEADER include/spdk/ublk.h 00:02:23.234 TEST_HEADER include/spdk/util.h 00:02:23.234 TEST_HEADER include/spdk/uuid.h 00:02:23.234 TEST_HEADER include/spdk/version.h 00:02:23.234 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.234 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.234 TEST_HEADER include/spdk/vhost.h 00:02:23.234 TEST_HEADER include/spdk/vmd.h 00:02:23.234 TEST_HEADER include/spdk/xor.h 00:02:23.234 TEST_HEADER include/spdk/zipf.h 00:02:23.234 CXX test/cpp_headers/accel.o 00:02:23.234 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:23.234 LINK blobcli 00:02:23.234 CC test/dma/test_dma/test_dma.o 00:02:23.234 LINK mkfs 00:02:23.234 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.234 CXX test/cpp_headers/accel_module.o 00:02:23.234 LINK arbitration 00:02:23.234 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.234 CXX test/cpp_headers/assert.o 00:02:23.493 CXX test/cpp_headers/barrier.o 00:02:23.493 LINK nvme_manage 00:02:23.493 LINK iscsi_tgt 00:02:23.493 CC test/event/event_perf/event_perf.o 00:02:23.493 CC test/event/reactor/reactor.o 00:02:23.493 CXX test/cpp_headers/base64.o 00:02:23.493 LINK bdevperf 00:02:23.493 LINK test_dma 00:02:23.493 CC examples/nvme/hotplug/hotplug.o 00:02:23.493 CC test/lvol/esnap/esnap.o 00:02:23.754 LINK reactor 00:02:23.754 LINK event_perf 00:02:23.754 CXX test/cpp_headers/bdev.o 00:02:23.754 LINK mem_callbacks 00:02:23.754 CC app/spdk_tgt/spdk_tgt.o 00:02:23.754 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:23.754 CC examples/nvme/abort/abort.o 00:02:23.754 CC test/event/reactor_perf/reactor_perf.o 00:02:23.754 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:23.754 CXX test/cpp_headers/bdev_module.o 00:02:23.754 LINK hotplug 00:02:24.013 CC test/env/vtophys/vtophys.o 00:02:24.013 LINK cmb_copy 00:02:24.013 LINK reactor_perf 00:02:24.013 LINK spdk_tgt 00:02:24.013 LINK pmr_persistence 00:02:24.013 CXX test/cpp_headers/bdev_zone.o 00:02:24.013 LINK vtophys 00:02:24.013 CC test/event/app_repeat/app_repeat.o 00:02:24.013 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.273 CC test/event/scheduler/scheduler.o 00:02:24.273 LINK abort 00:02:24.273 CXX test/cpp_headers/bit_array.o 00:02:24.273 CC test/env/memory/memory_ut.o 00:02:24.273 CC app/spdk_lspci/spdk_lspci.o 00:02:24.273 CC app/spdk_nvme_perf/perf.o 00:02:24.273 LINK app_repeat 00:02:24.273 LINK env_dpdk_post_init 00:02:24.273 CXX test/cpp_headers/bit_pool.o 00:02:24.273 LINK spdk_lspci 00:02:24.273 LINK scheduler 00:02:24.273 CC app/spdk_nvme_identify/identify.o 00:02:24.273 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.533 CC examples/vmd/led/led.o 00:02:24.533 CXX test/cpp_headers/blob_bdev.o 00:02:24.533 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.533 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.533 LINK lsvmd 00:02:24.533 LINK iscsi_fuzz 00:02:24.533 LINK led 00:02:24.533 CXX test/cpp_headers/blobfs.o 00:02:24.533 CC app/spdk_top/spdk_top.o 00:02:24.533 CXX test/cpp_headers/blob.o 00:02:24.793 LINK spdk_nvme_discover 00:02:24.793 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.793 CXX test/cpp_headers/conf.o 00:02:24.793 CC examples/nvmf/nvmf/nvmf.o 00:02:24.793 CC examples/util/zipf/zipf.o 00:02:24.793 
CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.793 CXX test/cpp_headers/config.o 00:02:25.053 CXX test/cpp_headers/cpuset.o 00:02:25.053 CC examples/thread/thread/thread_ex.o 00:02:25.053 LINK zipf 00:02:25.053 LINK memory_ut 00:02:25.053 LINK spdk_nvme_perf 00:02:25.053 CXX test/cpp_headers/crc16.o 00:02:25.053 LINK nvmf 00:02:25.053 LINK spdk_nvme_identify 00:02:25.053 CC test/env/pci/pci_ut.o 00:02:25.053 CC app/vhost/vhost.o 00:02:25.053 CXX test/cpp_headers/crc32.o 00:02:25.053 LINK thread 00:02:25.311 CC examples/idxd/perf/perf.o 00:02:25.311 LINK vhost_fuzz 00:02:25.311 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.311 LINK vhost 00:02:25.311 CXX test/cpp_headers/crc64.o 00:02:25.311 CC app/spdk_dd/spdk_dd.o 00:02:25.311 CC test/app/jsoncat/jsoncat.o 00:02:25.311 CC test/app/stub/stub.o 00:02:25.311 CXX test/cpp_headers/dif.o 00:02:25.311 CXX test/cpp_headers/dma.o 00:02:25.311 LINK pci_ut 00:02:25.568 LINK interrupt_tgt 00:02:25.568 LINK spdk_top 00:02:25.568 LINK jsoncat 00:02:25.568 LINK idxd_perf 00:02:25.568 CXX test/cpp_headers/endian.o 00:02:25.568 CXX test/cpp_headers/env_dpdk.o 00:02:25.568 LINK stub 00:02:25.568 CXX test/cpp_headers/env.o 00:02:25.568 CXX test/cpp_headers/event.o 00:02:25.568 CXX test/cpp_headers/fd_group.o 00:02:25.568 CXX test/cpp_headers/fd.o 00:02:25.568 CXX test/cpp_headers/file.o 00:02:25.568 LINK spdk_dd 00:02:25.568 CXX test/cpp_headers/ftl.o 00:02:25.568 CC app/fio/nvme/fio_plugin.o 00:02:25.827 CC test/nvme/aer/aer.o 00:02:25.827 CC test/nvme/reset/reset.o 00:02:25.827 CC test/nvme/sgl/sgl.o 00:02:25.827 CC test/nvme/e2edp/nvme_dp.o 00:02:25.827 CC test/nvme/overhead/overhead.o 00:02:25.827 CC app/fio/bdev/fio_plugin.o 00:02:25.827 CXX test/cpp_headers/gpt_spec.o 00:02:25.827 CC test/nvme/err_injection/err_injection.o 00:02:25.827 LINK aer 00:02:26.085 CXX test/cpp_headers/hexlify.o 00:02:26.085 LINK err_injection 00:02:26.085 LINK nvme_dp 00:02:26.085 LINK reset 00:02:26.085 LINK sgl 00:02:26.085 LINK overhead 00:02:26.085 CXX test/cpp_headers/histogram_data.o 00:02:26.085 CXX test/cpp_headers/idxd.o 00:02:26.085 CC test/nvme/startup/startup.o 00:02:26.085 CC test/nvme/reserve/reserve.o 00:02:26.085 CC test/nvme/simple_copy/simple_copy.o 00:02:26.085 CC test/nvme/connect_stress/connect_stress.o 00:02:26.085 CC test/nvme/boot_partition/boot_partition.o 00:02:26.085 CXX test/cpp_headers/idxd_spec.o 00:02:26.343 LINK spdk_nvme 00:02:26.343 LINK startup 00:02:26.343 LINK spdk_bdev 00:02:26.343 CC test/rpc_client/rpc_client_test.o 00:02:26.343 LINK simple_copy 00:02:26.343 LINK connect_stress 00:02:26.343 LINK reserve 00:02:26.343 CXX test/cpp_headers/init.o 00:02:26.343 LINK boot_partition 00:02:26.343 CXX test/cpp_headers/ioat.o 00:02:26.343 CC test/nvme/compliance/nvme_compliance.o 00:02:26.343 CC test/thread/poller_perf/poller_perf.o 00:02:26.343 CXX test/cpp_headers/ioat_spec.o 00:02:26.343 LINK rpc_client_test 00:02:26.602 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.602 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.602 CXX test/cpp_headers/iscsi_spec.o 00:02:26.602 CC test/nvme/fdp/fdp.o 00:02:26.602 CC test/nvme/cuse/cuse.o 00:02:26.602 CXX test/cpp_headers/json.o 00:02:26.602 LINK poller_perf 00:02:26.602 CXX test/cpp_headers/jsonrpc.o 00:02:26.602 CXX test/cpp_headers/likely.o 00:02:26.602 CXX test/cpp_headers/log.o 00:02:26.602 CXX test/cpp_headers/lvol.o 00:02:26.602 LINK doorbell_aers 00:02:26.602 LINK fused_ordering 00:02:26.602 CXX test/cpp_headers/memory.o 00:02:26.602 CXX test/cpp_headers/mmio.o 00:02:26.868 
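Note on the long CXX test/cpp_headers/*.o run in this stretch: it compiles one small translation unit per public spdk header, a self-sufficiency check in which any header that silently relies on its includer fails on the spot. A rough stand-alone equivalent is sketched below, with the include root assumed rather than taken from the log.

    # compile each public header in isolation; any failure names the offender
    for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "${h##*/}" \
        | c++ -I include -x c++ -c - -o /dev/null \
        || echo "not self-contained: $h"
    done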
CXX test/cpp_headers/nbd.o 00:02:26.868 CXX test/cpp_headers/notify.o 00:02:26.868 CXX test/cpp_headers/nvme.o 00:02:26.868 LINK nvme_compliance 00:02:26.868 CXX test/cpp_headers/nvme_intel.o 00:02:26.868 LINK fdp 00:02:26.868 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.868 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.868 CXX test/cpp_headers/nvme_spec.o 00:02:26.868 CXX test/cpp_headers/nvme_zns.o 00:02:26.868 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.868 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.868 CXX test/cpp_headers/nvmf.o 00:02:26.868 CXX test/cpp_headers/nvmf_spec.o 00:02:26.868 CXX test/cpp_headers/nvmf_transport.o 00:02:26.868 CXX test/cpp_headers/opal.o 00:02:26.868 CXX test/cpp_headers/opal_spec.o 00:02:26.868 CXX test/cpp_headers/pci_ids.o 00:02:26.868 CXX test/cpp_headers/pipe.o 00:02:27.139 CXX test/cpp_headers/queue.o 00:02:27.139 CXX test/cpp_headers/reduce.o 00:02:27.139 CXX test/cpp_headers/rpc.o 00:02:27.139 CXX test/cpp_headers/scheduler.o 00:02:27.139 CXX test/cpp_headers/scsi.o 00:02:27.139 CXX test/cpp_headers/scsi_spec.o 00:02:27.139 CXX test/cpp_headers/sock.o 00:02:27.139 CXX test/cpp_headers/stdinc.o 00:02:27.139 CXX test/cpp_headers/string.o 00:02:27.139 CXX test/cpp_headers/thread.o 00:02:27.139 CXX test/cpp_headers/trace.o 00:02:27.139 CXX test/cpp_headers/trace_parser.o 00:02:27.139 CXX test/cpp_headers/tree.o 00:02:27.139 CXX test/cpp_headers/ublk.o 00:02:27.139 CXX test/cpp_headers/util.o 00:02:27.139 CXX test/cpp_headers/uuid.o 00:02:27.139 CXX test/cpp_headers/version.o 00:02:27.139 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.139 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.139 CXX test/cpp_headers/vhost.o 00:02:27.139 CXX test/cpp_headers/vmd.o 00:02:27.398 CXX test/cpp_headers/xor.o 00:02:27.398 CXX test/cpp_headers/zipf.o 00:02:27.398 LINK cuse 00:02:27.966 LINK esnap 00:02:28.226 00:02:28.226 real 0m46.538s 00:02:28.226 user 4m42.535s 00:02:28.226 sys 0m58.239s 00:02:28.226 14:03:29 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:28.226 ************************************ 00:02:28.226 END TEST make 00:02:28.226 ************************************ 00:02:28.226 14:03:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.226 14:03:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:28.226 14:03:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:28.226 14:03:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:28.226 14:03:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:28.226 14:03:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:28.226 14:03:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:28.226 14:03:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:28.226 14:03:29 -- scripts/common.sh@335 -- # IFS=.-: 00:02:28.226 14:03:29 -- scripts/common.sh@335 -- # read -ra ver1 00:02:28.226 14:03:29 -- scripts/common.sh@336 -- # IFS=.-: 00:02:28.226 14:03:29 -- scripts/common.sh@336 -- # read -ra ver2 00:02:28.226 14:03:29 -- scripts/common.sh@337 -- # local 'op=<' 00:02:28.226 14:03:29 -- scripts/common.sh@339 -- # ver1_l=2 00:02:28.226 14:03:29 -- scripts/common.sh@340 -- # ver2_l=1 00:02:28.226 14:03:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:28.226 14:03:29 -- scripts/common.sh@343 -- # case "$op" in 00:02:28.226 14:03:29 -- scripts/common.sh@344 -- # : 1 00:02:28.226 14:03:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:28.226 14:03:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:28.226 14:03:29 -- scripts/common.sh@364 -- # decimal 1 00:02:28.226 14:03:29 -- scripts/common.sh@352 -- # local d=1 00:02:28.226 14:03:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:28.226 14:03:29 -- scripts/common.sh@354 -- # echo 1 00:02:28.226 14:03:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:28.226 14:03:29 -- scripts/common.sh@365 -- # decimal 2 00:02:28.226 14:03:29 -- scripts/common.sh@352 -- # local d=2 00:02:28.226 14:03:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:28.226 14:03:29 -- scripts/common.sh@354 -- # echo 2 00:02:28.226 14:03:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:28.226 14:03:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:28.226 14:03:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:28.226 14:03:29 -- scripts/common.sh@367 -- # return 0 00:02:28.226 14:03:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:28.226 14:03:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:28.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.226 --rc genhtml_branch_coverage=1 00:02:28.226 --rc genhtml_function_coverage=1 00:02:28.226 --rc genhtml_legend=1 00:02:28.226 --rc geninfo_all_blocks=1 00:02:28.226 --rc geninfo_unexecuted_blocks=1 00:02:28.226 00:02:28.226 ' 00:02:28.226 14:03:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:28.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.226 --rc genhtml_branch_coverage=1 00:02:28.226 --rc genhtml_function_coverage=1 00:02:28.226 --rc genhtml_legend=1 00:02:28.226 --rc geninfo_all_blocks=1 00:02:28.226 --rc geninfo_unexecuted_blocks=1 00:02:28.226 00:02:28.226 ' 00:02:28.226 14:03:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:28.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.226 --rc genhtml_branch_coverage=1 00:02:28.226 --rc genhtml_function_coverage=1 00:02:28.226 --rc genhtml_legend=1 00:02:28.226 --rc geninfo_all_blocks=1 00:02:28.226 --rc geninfo_unexecuted_blocks=1 00:02:28.226 00:02:28.226 ' 00:02:28.226 14:03:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:28.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:28.226 --rc genhtml_branch_coverage=1 00:02:28.226 --rc genhtml_function_coverage=1 00:02:28.226 --rc genhtml_legend=1 00:02:28.226 --rc geninfo_all_blocks=1 00:02:28.226 --rc geninfo_unexecuted_blocks=1 00:02:28.226 00:02:28.226 ' 00:02:28.226 14:03:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:28.226 14:03:29 -- nvmf/common.sh@7 -- # uname -s 00:02:28.226 14:03:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:28.226 14:03:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:28.226 14:03:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:28.226 14:03:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:28.226 14:03:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:28.226 14:03:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:28.226 14:03:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:28.226 14:03:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:28.226 14:03:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:28.226 14:03:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:28.226 14:03:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:02:28.226 
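Note on the xtrace above: autotest_common.sh compares the installed lcov version (the last field of lcov --version, here 1.15) against 2 to pick the right coverage flags. Both version strings are split on ., - and : into arrays and compared element-wise; the first differing field decides the result. The same check re-runs before each test script below, which is why the LCOV_OPTS stanza repeats throughout this log. A condensed re-implementation of the comparison (function name kept, surrounding plumbing omitted):

    # lt A B -> succeeds when version A sorts before version B
    lt() {
      local -a ver1 ver2
      local v len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      (( len = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first smaller field wins
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "using lcov 1.x option set"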
14:03:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:02:28.226 14:03:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:28.226 14:03:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:28.226 14:03:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:28.226 14:03:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:28.226 14:03:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:28.226 14:03:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:28.226 14:03:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:28.226 14:03:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.226 14:03:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.226 14:03:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.226 14:03:29 -- paths/export.sh@5 -- # export PATH 00:02:28.226 14:03:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:28.226 14:03:29 -- nvmf/common.sh@46 -- # : 0 00:02:28.226 14:03:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:28.226 14:03:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:28.226 14:03:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:28.226 14:03:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:28.226 14:03:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:28.226 14:03:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:28.226 14:03:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:28.226 14:03:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:28.226 14:03:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:28.226 14:03:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:28.226 14:03:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:28.226 14:03:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:28.227 14:03:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:28.227 14:03:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:28.227 14:03:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:28.227 14:03:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:28.227 14:03:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:28.227 14:03:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:28.227 14:03:29 -- spdk/autotest.sh@47 
-- # /usr/sbin/udevadm monitor --property 00:02:28.227 14:03:29 -- spdk/autotest.sh@48 -- # udevadm_pid=48150 00:02:28.227 14:03:29 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:28.227 14:03:29 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:28.487 14:03:29 -- spdk/autotest.sh@54 -- # echo 48166 00:02:28.487 14:03:29 -- spdk/autotest.sh@56 -- # echo 48172 00:02:28.487 14:03:29 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:28.487 14:03:29 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.487 14:03:29 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:28.487 14:03:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:28.487 14:03:29 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:28.487 14:03:29 -- common/autotest_common.sh@10 -- # set +x 00:02:28.487 14:03:29 -- spdk/autotest.sh@70 -- # create_test_list 00:02:28.487 14:03:29 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:28.487 14:03:29 -- common/autotest_common.sh@10 -- # set +x 00:02:28.487 14:03:29 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:28.487 14:03:29 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:28.487 14:03:29 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:28.487 14:03:29 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:28.487 14:03:29 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:28.487 14:03:29 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:28.487 14:03:29 -- common/autotest_common.sh@1450 -- # uname 00:02:28.487 14:03:29 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:28.487 14:03:29 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:28.487 14:03:29 -- common/autotest_common.sh@1470 -- # uname 00:02:28.487 14:03:29 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:28.487 14:03:29 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:28.487 14:03:29 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:28.487 lcov: LCOV version 1.15 00:02:28.487 14:03:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:35.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:35.057 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:35.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:35.057 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:35.057 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:35.057 geninfo: WARNING: GCOV did not produce any data for 
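Note on the bootstrap traced above: before any test runs, autotest.sh points the kernel core pattern at the repo's core-collector.sh, starts udevadm monitor and the CPU-load/vmstat collectors in the background, and captures an initial (-i) lcov snapshot tagged Baseline; the geninfo warnings that follow simply flag .gcno files in which no functions were found. The baseline exists so that post-test coverage can be diffed against it. A sketch of the capture/merge pair follows, using the flags shown in this log; the post-test half does not appear in this excerpt and is an assumption about the usual workflow.

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    $LCOV -q -c --no-external -i -t Baseline -d . -o cov_base.info  # before tests
    # ... run tests ...
    $LCOV -q -c --no-external -t Tests -d . -o cov_test.info        # after tests (assumed)
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info       # merge tracefiles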
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:57.030 14:03:55 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:02:57.030 14:03:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:57.030 14:03:55 -- common/autotest_common.sh@10 -- # set +x 00:02:57.030 14:03:55 -- spdk/autotest.sh@89 -- # rm -f 00:02:57.030 14:03:55 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:02:57.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:57.030 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:02:57.030 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:02:57.030 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:02:57.030 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:02:57.030 14:03:56 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:02:57.030 14:03:56 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:57.030 14:03:56 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:57.030 14:03:56 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:02:57.030 14:03:56 -- 
common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.030 14:03:56 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:02:57.030 14:03:56 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:57.030 14:03:56 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.030 14:03:56 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:02:57.030 14:03:56 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:02:57.030 14:03:56 -- spdk/autotest.sh@108 -- # grep -v p 00:02:57.030 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.030 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.030 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:02:57.030 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:57.030 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:57.030 No valid GPT data, bailing 00:02:57.030 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:57.030 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.030 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.030 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:57.030 1+0 records in 00:02:57.030 1+0 records out 00:02:57.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236321 s, 44.4 MB/s 00:02:57.030 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.030 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.030 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:02:57.031 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:02:57.031 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:57.031 No valid GPT data, bailing 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.031 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.031 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:57.031 1+0 records in 00:02:57.031 1+0 records out 00:02:57.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453219 s, 231 MB/s 00:02:57.031 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.031 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.031 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:02:57.031 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:02:57.031 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:57.031 No valid GPT data, bailing 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.031 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.031 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:57.031 1+0 
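Note on the get_zoned_devs trace above: setup.sh reset refuses to rebind devices that back mounted filesystems (the "Active devices: mount@vda..." line), and zoned namespaces are ruled out first by reading each block device's queue/zoned attribute and treating anything other than none as zoned. Stand-alone, the scan reduces to the sketch below.

    # collect zoned nvme block devices, mirroring the is_block_zoned checks above
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
      [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] || continue
      zoned_devs[${nvme##*/}]=1
    done
    echo "zoned devices: ${!zoned_devs[*]}"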
records in 00:02:57.031 1+0 records out 00:02:57.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044941 s, 233 MB/s 00:02:57.031 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.031 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.031 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:02:57.031 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:02:57.031 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:02:57.031 No valid GPT data, bailing 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.031 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.031 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:02:57.031 1+0 records in 00:02:57.031 1+0 records out 00:02:57.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00570176 s, 184 MB/s 00:02:57.031 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.031 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.031 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:02:57.031 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:02:57.031 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:02:57.031 No valid GPT data, bailing 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.031 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.031 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:02:57.031 1+0 records in 00:02:57.031 1+0 records out 00:02:57.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552457 s, 190 MB/s 00:02:57.031 14:03:56 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:57.031 14:03:56 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:02:57.031 14:03:56 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:02:57.031 14:03:56 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:02:57.031 14:03:56 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:02:57.031 No valid GPT data, bailing 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:02:57.031 14:03:56 -- scripts/common.sh@393 -- # pt= 00:02:57.031 14:03:56 -- scripts/common.sh@394 -- # return 1 00:02:57.031 14:03:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:02:57.031 1+0 records in 00:02:57.031 1+0 records out 00:02:57.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442857 s, 237 MB/s 00:02:57.031 14:03:56 -- spdk/autotest.sh@116 -- # sync 00:02:57.031 14:03:57 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:57.031 14:03:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:57.031 14:03:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:57.031 14:03:58 -- spdk/autotest.sh@122 -- # uname -s 00:02:57.031 14:03:58 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:02:57.031 14:03:58 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:02:57.031 14:03:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:57.031 14:03:58 -- 
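Note on the probe-and-wipe loop traced through here: each candidate namespace is checked for a partition table with scripts/spdk-gpt.py and then blkid, and when neither finds one ("No valid GPT data, bailing"), autotest zeroes the first MiB so stale metadata cannot leak into the tests. A rough per-device sketch, keeping only the blkid half of the check:

    # probe-and-wipe loop as run above; DESTRUCTIVE on the named devices, needs root
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      pt=$(blkid -s PTTYPE -o value "$dev")
      [[ -z $pt ]] && dd if=/dev/zero of="$dev" bs=1M count=1
    done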
common/autotest_common.sh@1093 -- # xtrace_disable 00:02:57.031 14:03:58 -- common/autotest_common.sh@10 -- # set +x 00:02:57.292 ************************************ 00:02:57.292 START TEST setup.sh 00:02:57.292 ************************************ 00:02:57.292 14:03:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:02:57.292 * Looking for test storage... 00:02:57.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:02:57.292 14:03:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:57.292 14:03:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:57.292 14:03:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:57.292 14:03:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:57.292 14:03:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:57.292 14:03:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:57.292 14:03:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:57.292 14:03:58 -- scripts/common.sh@335 -- # IFS=.-: 00:02:57.292 14:03:58 -- scripts/common.sh@335 -- # read -ra ver1 00:02:57.292 14:03:58 -- scripts/common.sh@336 -- # IFS=.-: 00:02:57.292 14:03:58 -- scripts/common.sh@336 -- # read -ra ver2 00:02:57.292 14:03:58 -- scripts/common.sh@337 -- # local 'op=<' 00:02:57.292 14:03:58 -- scripts/common.sh@339 -- # ver1_l=2 00:02:57.292 14:03:58 -- scripts/common.sh@340 -- # ver2_l=1 00:02:57.292 14:03:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:57.292 14:03:58 -- scripts/common.sh@343 -- # case "$op" in 00:02:57.292 14:03:58 -- scripts/common.sh@344 -- # : 1 00:02:57.292 14:03:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:57.292 14:03:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:57.292 14:03:58 -- scripts/common.sh@364 -- # decimal 1 00:02:57.292 14:03:58 -- scripts/common.sh@352 -- # local d=1 00:02:57.292 14:03:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:57.292 14:03:58 -- scripts/common.sh@354 -- # echo 1 00:02:57.292 14:03:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:57.292 14:03:58 -- scripts/common.sh@365 -- # decimal 2 00:02:57.292 14:03:58 -- scripts/common.sh@352 -- # local d=2 00:02:57.292 14:03:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.292 14:03:58 -- scripts/common.sh@354 -- # echo 2 00:02:57.292 14:03:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:57.292 14:03:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:57.292 14:03:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:57.292 14:03:58 -- scripts/common.sh@367 -- # return 0 00:02:57.292 14:03:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.292 14:03:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:57.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.292 --rc genhtml_branch_coverage=1 00:02:57.293 --rc genhtml_function_coverage=1 00:02:57.293 --rc genhtml_legend=1 00:02:57.293 --rc geninfo_all_blocks=1 00:02:57.293 --rc geninfo_unexecuted_blocks=1 00:02:57.293 00:02:57.293 ' 00:02:57.293 14:03:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:57.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.293 --rc genhtml_branch_coverage=1 00:02:57.293 --rc genhtml_function_coverage=1 00:02:57.293 --rc genhtml_legend=1 00:02:57.293 --rc geninfo_all_blocks=1 00:02:57.293 --rc geninfo_unexecuted_blocks=1 00:02:57.293 00:02:57.293 ' 00:02:57.293 14:03:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:57.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.293 --rc genhtml_branch_coverage=1 00:02:57.293 --rc genhtml_function_coverage=1 00:02:57.293 --rc genhtml_legend=1 00:02:57.293 --rc geninfo_all_blocks=1 00:02:57.293 --rc geninfo_unexecuted_blocks=1 00:02:57.293 00:02:57.293 ' 00:02:57.293 14:03:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:57.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.293 --rc genhtml_branch_coverage=1 00:02:57.293 --rc genhtml_function_coverage=1 00:02:57.293 --rc genhtml_legend=1 00:02:57.293 --rc geninfo_all_blocks=1 00:02:57.293 --rc geninfo_unexecuted_blocks=1 00:02:57.293 00:02:57.293 ' 00:02:57.293 14:03:58 -- setup/test-setup.sh@10 -- # uname -s 00:02:57.293 14:03:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:57.293 14:03:58 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:02:57.293 14:03:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:57.293 14:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:57.293 14:03:58 -- common/autotest_common.sh@10 -- # set +x 00:02:57.293 ************************************ 00:02:57.293 START TEST acl 00:02:57.293 ************************************ 00:02:57.293 14:03:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:02:57.293 * Looking for test storage... 
00:02:57.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:02:57.293 14:03:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:57.293 14:03:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:57.293 14:03:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:57.555 14:03:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:57.555 14:03:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:57.555 14:03:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:57.555 14:03:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:57.555 14:03:58 -- scripts/common.sh@335 -- # IFS=.-: 00:02:57.555 14:03:58 -- scripts/common.sh@335 -- # read -ra ver1 00:02:57.555 14:03:58 -- scripts/common.sh@336 -- # IFS=.-: 00:02:57.555 14:03:58 -- scripts/common.sh@336 -- # read -ra ver2 00:02:57.555 14:03:58 -- scripts/common.sh@337 -- # local 'op=<' 00:02:57.555 14:03:58 -- scripts/common.sh@339 -- # ver1_l=2 00:02:57.555 14:03:58 -- scripts/common.sh@340 -- # ver2_l=1 00:02:57.555 14:03:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:57.555 14:03:58 -- scripts/common.sh@343 -- # case "$op" in 00:02:57.555 14:03:58 -- scripts/common.sh@344 -- # : 1 00:02:57.555 14:03:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:57.555 14:03:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:57.555 14:03:58 -- scripts/common.sh@364 -- # decimal 1 00:02:57.555 14:03:58 -- scripts/common.sh@352 -- # local d=1 00:02:57.555 14:03:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:57.555 14:03:58 -- scripts/common.sh@354 -- # echo 1 00:02:57.555 14:03:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:57.555 14:03:58 -- scripts/common.sh@365 -- # decimal 2 00:02:57.555 14:03:58 -- scripts/common.sh@352 -- # local d=2 00:02:57.555 14:03:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.555 14:03:58 -- scripts/common.sh@354 -- # echo 2 00:02:57.555 14:03:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:57.555 14:03:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:57.555 14:03:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:57.555 14:03:58 -- scripts/common.sh@367 -- # return 0 00:02:57.555 14:03:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.555 14:03:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.555 --rc genhtml_branch_coverage=1 00:02:57.555 --rc genhtml_function_coverage=1 00:02:57.555 --rc genhtml_legend=1 00:02:57.555 --rc geninfo_all_blocks=1 00:02:57.555 --rc geninfo_unexecuted_blocks=1 00:02:57.555 00:02:57.555 ' 00:02:57.555 14:03:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.555 --rc genhtml_branch_coverage=1 00:02:57.555 --rc genhtml_function_coverage=1 00:02:57.555 --rc genhtml_legend=1 00:02:57.555 --rc geninfo_all_blocks=1 00:02:57.555 --rc geninfo_unexecuted_blocks=1 00:02:57.555 00:02:57.555 ' 00:02:57.555 14:03:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.555 --rc genhtml_branch_coverage=1 00:02:57.555 --rc genhtml_function_coverage=1 00:02:57.555 --rc genhtml_legend=1 00:02:57.555 --rc geninfo_all_blocks=1 00:02:57.555 --rc geninfo_unexecuted_blocks=1 00:02:57.555 00:02:57.555 ' 00:02:57.555 14:03:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.555 --rc genhtml_branch_coverage=1 00:02:57.555 --rc genhtml_function_coverage=1 00:02:57.555 --rc genhtml_legend=1 00:02:57.555 --rc geninfo_all_blocks=1 00:02:57.555 --rc geninfo_unexecuted_blocks=1 00:02:57.555 00:02:57.555 ' 00:02:57.555 14:03:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:57.555 14:03:58 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:02:57.555 14:03:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:02:57.555 14:03:58 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:02:57.555 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.555 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:57.555 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:02:57.555 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.555 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.555 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.555 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.556 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.556 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:02:57.556 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.556 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:02:57.556 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.556 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:02:57.556 14:03:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:02:57.556 14:03:58 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:02:57.556 
14:03:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:57.556 14:03:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:02:57.556 14:03:58 -- setup/acl.sh@12 -- # devs=() 00:02:57.556 14:03:58 -- setup/acl.sh@12 -- # declare -a devs 00:02:57.556 14:03:58 -- setup/acl.sh@13 -- # drivers=() 00:02:57.556 14:03:58 -- setup/acl.sh@13 -- # declare -A drivers 00:02:57.556 14:03:58 -- setup/acl.sh@51 -- # setup reset 00:02:57.556 14:03:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.556 14:03:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:02:58.501 14:03:59 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:58.501 14:03:59 -- setup/acl.sh@16 -- # local dev driver 00:02:58.501 14:03:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.501 14:03:59 -- setup/acl.sh@15 -- # setup output status 00:02:58.501 14:03:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.501 14:03:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:58.501 Hugepages 00:02:58.501 node hugesize free / total 00:02:58.501 14:03:59 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.501 14:03:59 -- setup/acl.sh@19 -- # continue 00:02:58.501 14:03:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.501 00:02:58.501 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.501 14:03:59 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.501 14:03:59 -- setup/acl.sh@19 -- # continue 00:02:58.501 14:03:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.762 14:03:59 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:02:58.762 14:03:59 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:02:58.762 14:03:59 -- setup/acl.sh@20 -- # continue 00:02:58.763 14:03:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.763 14:04:00 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.763 14:04:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.763 14:04:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.763 14:04:00 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.763 14:04:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.763 14:04:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.763 14:04:00 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.763 14:04:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.763 14:04:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.763 14:04:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.024 14:04:00 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:02:59.024 14:04:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:59.024 14:04:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:02:59.024 14:04:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:59.024 14:04:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
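Note on the status scan traced above: acl.sh discovers its candidate controllers by parsing setup.sh status. Each row is split with read into placeholder fields, rows whose second column is not a B:D:F address (the hugepage summary lines) are skipped via the *:*:*.* glob, and only rows whose driver column says nvme are kept, which is why the virtio-pci row falls out. The loop distilled, with the setup.sh path abbreviated:

    # keep only PCI functions currently bound to the kernel nvme driver
    devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue   # skips the hugepage summary rows
      [[ $driver == nvme ]] || continue   # virtio-pci and friends fall out here
      devs+=("$dev")
      drivers[$dev]=$driver
    done < <(scripts/setup.sh status)
    echo "collected ${#devs[@]} nvme controllers: ${devs[*]}"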
00:02:59.024 14:04:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.024 14:04:00 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:02:59.024 14:04:00 -- setup/acl.sh@54 -- # run_test denied denied 00:02:59.024 14:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.024 14:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.024 14:04:00 -- common/autotest_common.sh@10 -- # set +x 00:02:59.024 ************************************ 00:02:59.024 START TEST denied 00:02:59.024 ************************************ 00:02:59.024 14:04:00 -- common/autotest_common.sh@1114 -- # denied 00:02:59.024 14:04:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:02:59.024 14:04:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:02:59.024 14:04:00 -- setup/acl.sh@38 -- # setup output config 00:02:59.024 14:04:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.024 14:04:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:02:59.965 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:02:59.965 14:04:01 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:02:59.965 14:04:01 -- setup/acl.sh@28 -- # local dev driver 00:02:59.965 14:04:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.965 14:04:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:02:59.965 14:04:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:02:59.965 14:04:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.965 14:04:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.965 14:04:01 -- setup/acl.sh@41 -- # setup reset 00:02:59.965 14:04:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.965 14:04:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:06.556 00:03:06.556 real 0m6.921s 00:03:06.556 user 0m0.677s 00:03:06.556 sys 0m1.080s 00:03:06.556 14:04:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:06.556 ************************************ 00:03:06.556 END TEST denied 00:03:06.556 ************************************ 00:03:06.556 14:04:07 -- common/autotest_common.sh@10 -- # set +x 00:03:06.556 14:04:07 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.556 14:04:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:06.556 14:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:06.556 14:04:07 -- common/autotest_common.sh@10 -- # set +x 00:03:06.556 ************************************ 00:03:06.556 START TEST allowed 00:03:06.556 ************************************ 00:03:06.556 14:04:07 -- common/autotest_common.sh@1114 -- # allowed 00:03:06.556 14:04:07 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:06.556 14:04:07 -- setup/acl.sh@45 -- # setup output config 00:03:06.556 14:04:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.556 14:04:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:06.556 14:04:07 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:07.128 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:07.128 14:04:08 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:07.128 14:04:08 -- setup/acl.sh@28 -- # local dev driver 00:03:07.128 14:04:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.128 14:04:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.128 14:04:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.128 14:04:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.128 14:04:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.128 14:04:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.128 14:04:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:07.128 14:04:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:03:07.128 14:04:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:07.128 14:04:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:07.128 14:04:08 -- setup/acl.sh@48 -- # setup reset 00:03:07.128 14:04:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.128 14:04:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:08.074 ************************************ 00:03:08.074 END TEST allowed 00:03:08.074 ************************************ 00:03:08.074 00:03:08.074 real 0m2.075s 00:03:08.074 user 0m0.852s 00:03:08.074 sys 0m0.985s 00:03:08.074 14:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:08.074 14:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:08.074 ************************************ 00:03:08.074 END TEST acl 00:03:08.074 ************************************ 00:03:08.074 00:03:08.074 real 0m10.710s 00:03:08.074 user 0m2.221s 00:03:08.074 sys 0m2.938s 00:03:08.074 14:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:08.074 14:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:08.074 14:04:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:08.074 14:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.074 14:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.074 14:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:08.074 ************************************ 00:03:08.074 START TEST hugepages 00:03:08.074 ************************************ 00:03:08.074 14:04:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:08.074 * Looking for test storage... 
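Note on the denied/allowed pair above: PCI_BLOCKED makes setup.sh skip a controller outright, PCI_ALLOWED lets the others be rebound (hence the "nvme -> uio_pci_generic" line), and both tests settle on the same verification, resolving the device's driver symlink under sysfs and comparing its basename against the expected driver. The escaped \n\v\m\e in the trace is just xtrace quoting the literal match pattern. In isolation the check looks like the sketch below; the BDF is an example.

    # report which kernel driver a PCI function is bound to (BDF is an example)
    bdf=0000:00:06.0
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
      echo "$bdf -> ${driver##*/}"
    else
      echo "$bdf is not bound to any driver"
    fi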
00:03:08.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:08.074 14:04:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.074 14:04:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.074 14:04:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.343 14:04:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.343 14:04:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.343 14:04:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.343 14:04:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.343 14:04:09 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.343 14:04:09 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.343 14:04:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.343 14:04:09 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.343 14:04:09 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.343 14:04:09 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.343 14:04:09 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.343 14:04:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.343 14:04:09 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.343 14:04:09 -- scripts/common.sh@344 -- # : 1 00:03:08.343 14:04:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.343 14:04:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.343 14:04:09 -- scripts/common.sh@364 -- # decimal 1 00:03:08.343 14:04:09 -- scripts/common.sh@352 -- # local d=1 00:03:08.343 14:04:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.343 14:04:09 -- scripts/common.sh@354 -- # echo 1 00:03:08.343 14:04:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.343 14:04:09 -- scripts/common.sh@365 -- # decimal 2 00:03:08.343 14:04:09 -- scripts/common.sh@352 -- # local d=2 00:03:08.343 14:04:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.343 14:04:09 -- scripts/common.sh@354 -- # echo 2 00:03:08.343 14:04:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.343 14:04:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.343 14:04:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.343 14:04:09 -- scripts/common.sh@367 -- # return 0 00:03:08.343 14:04:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.343 14:04:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.343 --rc genhtml_branch_coverage=1 00:03:08.343 --rc genhtml_function_coverage=1 00:03:08.343 --rc genhtml_legend=1 00:03:08.343 --rc geninfo_all_blocks=1 00:03:08.343 --rc geninfo_unexecuted_blocks=1 00:03:08.343 00:03:08.343 ' 00:03:08.343 14:04:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.343 --rc genhtml_branch_coverage=1 00:03:08.344 --rc genhtml_function_coverage=1 00:03:08.344 --rc genhtml_legend=1 00:03:08.344 --rc geninfo_all_blocks=1 00:03:08.344 --rc geninfo_unexecuted_blocks=1 00:03:08.344 00:03:08.344 ' 00:03:08.344 14:04:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.344 --rc genhtml_branch_coverage=1 00:03:08.344 --rc genhtml_function_coverage=1 00:03:08.344 --rc genhtml_legend=1 00:03:08.344 --rc geninfo_all_blocks=1 00:03:08.344 --rc geninfo_unexecuted_blocks=1 00:03:08.344 00:03:08.344 ' 00:03:08.344 14:04:09 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.344 --rc genhtml_branch_coverage=1 00:03:08.344 --rc genhtml_function_coverage=1 00:03:08.344 --rc genhtml_legend=1 00:03:08.344 --rc geninfo_all_blocks=1 00:03:08.344 --rc geninfo_unexecuted_blocks=1 00:03:08.344 00:03:08.344 ' 00:03:08.344 14:04:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:08.344 14:04:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:08.344 14:04:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:08.344 14:04:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:08.344 14:04:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:08.344 14:04:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:08.344 14:04:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:08.344 14:04:09 -- setup/common.sh@18 -- # local node= 00:03:08.344 14:04:09 -- setup/common.sh@19 -- # local var val 00:03:08.344 14:04:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.344 14:04:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.344 14:04:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.344 14:04:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.344 14:04:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.344 14:04:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.344 14:04:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 5775428 kB' 'MemAvailable: 7331088 kB' 'Buffers: 2684 kB' 'Cached: 1768808 kB' 'SwapCached: 0 kB' 'Active: 465356 kB' 'Inactive: 1421736 kB' 'Active(anon): 126132 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421736 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 117280 kB' 'Mapped: 50712 kB' 'Shmem: 10532 kB' 'KReclaimable: 63700 kB' 'Slab: 162252 kB' 'SReclaimable: 63700 kB' 'SUnreclaim: 98552 kB' 'KernelStack: 6512 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12409988 kB' 'Committed_AS: 320556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:08.344 14:04:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.344 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.344 14:04:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.344 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.344 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.344 14:04:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.344 14:04:09 -- 
setup/common.sh@32 -- # continue 00:03:08.344 14:04:09 [... per-field scan elided: Buffers through AnonHugePages are each read from /proc/meminfo, tested against Hugepagesize, and skipped with continue ...] 00:03:08.345 14:04:09 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # continue 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.345 14:04:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.345 14:04:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.345 14:04:09 -- setup/common.sh@33 -- # echo 2048 00:03:08.345 14:04:09 -- setup/common.sh@33 -- # return 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:08.345 14:04:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:08.345 14:04:09 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:08.345 14:04:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:08.345 14:04:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:08.345 14:04:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:08.345 14:04:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:08.345 14:04:09 -- setup/hugepages.sh@207 -- # get_nodes 00:03:08.345 14:04:09 -- setup/hugepages.sh@27 -- # local node 00:03:08.345 14:04:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.345 14:04:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:08.345 14:04:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:08.345 14:04:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.345 14:04:09 -- setup/hugepages.sh@208 -- # clear_hp 00:03:08.345 14:04:09 -- setup/hugepages.sh@37 -- # local node hp 00:03:08.345 14:04:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.345 14:04:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.345 14:04:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.345 14:04:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.345 14:04:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.345 14:04:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:08.345 14:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.345 14:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.345 14:04:09 -- common/autotest_common.sh@10 -- # set +x 00:03:08.345 ************************************ 00:03:08.345 START TEST default_setup 00:03:08.345 ************************************ 00:03:08.345 14:04:09 -- common/autotest_common.sh@1114 -- # default_setup 00:03:08.345 14:04:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.345 14:04:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:08.345 14:04:09 -- setup/hugepages.sh@51 -- # shift 00:03:08.345 14:04:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:08.345 14:04:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.345 14:04:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.345 14:04:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.345 14:04:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:08.345 14:04:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.345 14:04:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.345 14:04:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:08.345 14:04:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.345 14:04:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.345 14:04:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:08.345 14:04:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.345 14:04:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:08.345 14:04:09 -- setup/hugepages.sh@73 -- # return 0 00:03:08.345 14:04:09 -- setup/hugepages.sh@137 -- # setup output 00:03:08.345 14:04:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.345 14:04:09 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:09.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:09.344 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.344 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.344 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.344 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:09.609 14:04:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:09.609 14:04:10 -- setup/hugepages.sh@89 -- # local node 00:03:09.609 14:04:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.609 14:04:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.609 14:04:10 -- setup/hugepages.sh@92 -- # local surp 00:03:09.609 14:04:10 -- setup/hugepages.sh@93 -- # local resv 00:03:09.609 14:04:10 -- setup/hugepages.sh@94 -- # local anon 00:03:09.609 14:04:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.609 14:04:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.609 14:04:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.609 14:04:10 -- setup/common.sh@18 -- # local node= 00:03:09.609 14:04:10 -- setup/common.sh@19 -- # local var val 00:03:09.609 14:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.609 14:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.609 14:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.609 14:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.609 14:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.609 14:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.609 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.609 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 14:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7887232 kB' 'MemAvailable: 9442700 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 467024 kB' 'Inactive: 1421752 kB' 'Active(anon): 127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421752 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118932 kB' 'Mapped: 50836 kB' 'Shmem: 10496 kB' 'KReclaimable: 63288 kB' 'Slab: 161980 kB' 'SReclaimable: 63288 kB' 'SUnreclaim: 98692 kB' 'KernelStack: 6544 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:09.610 14:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.610 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.610 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.610 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.610 14:04:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.610 14:04:10 -- 
setup/common.sh@32 -- # continue 00:03:09.610 14:04:10 [... per-field scan elided: MemAvailable through HardwareCorrupted are each tested against AnonHugePages and skipped with continue ...] 00:03:09.611 14:04:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.611 14:04:10 -- setup/common.sh@33 -- # echo 0 00:03:09.611 14:04:10 -- setup/common.sh@33 -- # return 0 00:03:09.611 14:04:10 -- setup/hugepages.sh@97 -- # anon=0 00:03:09.611 14:04:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.611 14:04:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.611 14:04:10 -- setup/common.sh@18 -- # local node= 00:03:09.611 14:04:10 -- setup/common.sh@19 -- # local var val 00:03:09.611 14:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.611 14:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.611 14:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.611 14:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.611 14:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.611 14:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 14:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7887232 kB' 'MemAvailable: 9442708 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466712 kB' 'Inactive: 1421760 kB' 'Active(anon): 127488 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118624 kB' 'Mapped: 50828 kB' 'Shmem: 10492 kB' 'KReclaimable: 63288 kB' 'Slab: 161976 kB' 'SReclaimable: 63288 kB' 'SUnreclaim: 98688 kB' 'KernelStack: 6496 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.611 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.611 14:04:10 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.611 14:04:10 [... per-field scan elided: Cached through AnonHugePages are each tested against HugePages_Surp and skipped with continue ...] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # 
continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.612 14:04:10 -- setup/common.sh@33 -- # echo 0 00:03:09.612 14:04:10 -- setup/common.sh@33 -- # return 0 00:03:09.612 14:04:10 -- setup/hugepages.sh@99 -- # surp=0 00:03:09.612 14:04:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.612 14:04:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.612 14:04:10 -- setup/common.sh@18 -- # local node= 00:03:09.612 14:04:10 -- setup/common.sh@19 -- # local var val 00:03:09.612 14:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.612 14:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.612 14:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.612 14:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.612 14:04:10 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:09.612 14:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7887232 kB' 'MemAvailable: 9442708 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466800 kB' 'Inactive: 1421760 kB' 'Active(anon): 127576 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118680 kB' 'Mapped: 50828 kB' 'Shmem: 10492 kB' 'KReclaimable: 63288 kB' 'Slab: 161976 kB' 'SReclaimable: 63288 kB' 'SUnreclaim: 98688 kB' 'KernelStack: 6464 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.612 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.612 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 
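The stretch of trace above (and the near-identical stretches that follow) is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time: every field that does not match the requested key produces a "@32 continue" line, which is why a single lookup occupies so much of this log. A condensed sketch of the loop being executed, reconstructed from the @17-@33 trace lines; the body is paraphrased rather than copied, and the extglob/herestring plumbing is an assumption about the surrounding script:

    # get_meminfo <field> [node] - print one value from a meminfo snapshot.
    # Reconstructed from the setup/common.sh@17-@33 xtrace; not line-for-line.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node stats instead (@23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"                   # @28
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")            # strip "Node N " prefixes (@29)
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # @31
            # Every non-matching field is a "continue" line in the trace (@32).
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Total prints 1024 at this point in the run.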
00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- 
setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.613 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.613 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.614 14:04:10 -- setup/common.sh@33 -- # echo 0 00:03:09.614 14:04:10 -- setup/common.sh@33 -- # return 0 00:03:09.614 nr_hugepages=1024 00:03:09.614 resv_hugepages=0 00:03:09.614 surplus_hugepages=0 00:03:09.614 anon_hugepages=0 00:03:09.614 14:04:10 -- setup/hugepages.sh@100 -- # resv=0 00:03:09.614 14:04:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.614 14:04:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.614 14:04:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.614 14:04:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.614 14:04:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.614 14:04:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.614 14:04:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.614 14:04:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.614 14:04:10 -- setup/common.sh@18 -- # local node= 00:03:09.614 14:04:10 -- setup/common.sh@19 -- # local var val 00:03:09.614 14:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.614 14:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.614 14:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.614 14:04:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.614 14:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.614 14:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7887232 kB' 'MemAvailable: 9442708 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466720 kB' 'Inactive: 1421760 kB' 'Active(anon): 127496 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118596 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63288 kB' 'Slab: 161972 kB' 'SReclaimable: 63288 kB' 'SUnreclaim: 98684 kB' 'KernelStack: 6464 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 
14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 
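This second full-snapshot scan is get_meminfo fetching HugePages_Total so that hugepages.sh can close its accounting check: HugePages_Surp and HugePages_Rsvd both came back 0 above (@99, @100), the summary nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 was echoed, and the @107/@110 assertions reduce to 1024 == 1024 + 0 + 0. The check, paraphrased with the trace's own variable names (get_meminfo as sketched earlier):

    nr_hugepages=1024                        # the configured request
    surp=$(get_meminfo HugePages_Surp)       # 0 in this run (@99)
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run (@100)
    total=$(get_meminfo HugePages_Total)     # 1024 in this run (@110)
    (( total == nr_hugepages + surp + resv ))    # passes: 1024 == 1024 + 0 + 0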
00:03:09.614 14:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.614 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.614 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 
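Once the totals check passes (a little further down, at @110), the same verification is repeated per NUMA node: get_nodes (@27-@33) globs /sys/devices/system/node/node<N> to discover nodes, records a page count for each, and verify_nr_hugepages compares node 0 against the expectation string echoed near the end of the test. A sketch of that pass, condensed from the hugepages.sh@27-@33 and @115-@130 trace lines below; reading HugePages_Total per node here is a simplification of the script's actual bookkeeping arrays:

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do    # @29
        id=${node##*node}
        nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
    done
    (( ${#nodes_sys[@]} > 0 ))                               # @33: no_nodes > 0
    echo "node0=${nodes_sys[0]} expecting 1024"              # @128
    [[ ${nodes_sys[0]} == 1024 ]]                            # @130

This VM exposes a single node, so node0 carries all 1024 pages.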
00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 
14:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.615 14:04:10 -- setup/common.sh@33 -- # echo 1024 00:03:09.615 14:04:10 -- setup/common.sh@33 -- # return 0 00:03:09.615 14:04:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.615 14:04:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.615 14:04:10 -- setup/hugepages.sh@27 -- # local node 00:03:09.615 14:04:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.615 14:04:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:09.615 14:04:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:09.615 14:04:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.615 14:04:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.615 14:04:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.615 14:04:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.615 14:04:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.615 14:04:10 -- setup/common.sh@18 -- # local node=0 00:03:09.615 14:04:10 -- setup/common.sh@19 -- # local var val 00:03:09.615 14:04:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.615 14:04:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.615 14:04:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.615 14:04:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.615 14:04:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.615 14:04:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7887232 kB' 'MemUsed: 4349840 kB' 'SwapCached: 0 kB' 'Active: 466536 kB' 'Inactive: 1421760 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421760 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771480 kB' 'Mapped: 50736 kB' 'AnonPages: 118388 kB' 'Shmem: 10492 kB' 'KernelStack: 6484 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63288 kB' 'Slab: 161972 kB' 'SReclaimable: 63288 kB' 'SUnreclaim: 98684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.615 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.615 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # 
continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # continue 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.616 14:04:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.616 14:04:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.616 14:04:10 -- setup/common.sh@33 -- # echo 0 00:03:09.616 14:04:10 -- setup/common.sh@33 -- # return 0 00:03:09.616 node0=1024 expecting 1024 00:03:09.616 ************************************ 00:03:09.616 END TEST default_setup 00:03:09.616 ************************************ 00:03:09.616 14:04:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.616 14:04:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.616 14:04:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.616 14:04:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.616 14:04:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.616 14:04:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.616 00:03:09.616 real 0m1.310s 00:03:09.616 user 0m0.507s 00:03:09.616 sys 0m0.626s 00:03:09.616 14:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:09.616 14:04:10 -- common/autotest_common.sh@10 -- # set +x 00:03:09.616 14:04:10 -- 
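default_setup finishes here (0m1.310s real) and the harness immediately launches the next case. The START/END banners and the real/user/sys lines framing each case come from run_test in common/autotest_common.sh; a paraphrase of what that wrapper does, inferred only from its visible output and the '[ 2 -le 1 ]' argument check in the trace (the wrapper's actual body is not shown in this log):

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1         # the '[ 2 -le 1 ]' guard (@1087)
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                         # produces the real/user/sys lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }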
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:09.616 14:04:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.616 14:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.616 14:04:10 -- common/autotest_common.sh@10 -- # set +x 00:03:09.616 ************************************ 00:03:09.616 START TEST per_node_1G_alloc 00:03:09.616 ************************************ 00:03:09.616 14:04:11 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:09.616 14:04:11 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.616 14:04:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:09.616 14:04:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.616 14:04:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.616 14:04:11 -- setup/hugepages.sh@51 -- # shift 00:03:09.616 14:04:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.616 14:04:11 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.616 14:04:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.616 14:04:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.616 14:04:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.616 14:04:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.616 14:04:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.616 14:04:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.616 14:04:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:09.616 14:04:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.616 14:04:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.616 14:04:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.616 14:04:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.616 14:04:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.616 14:04:11 -- setup/hugepages.sh@73 -- # return 0 00:03:09.616 14:04:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.616 14:04:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:09.617 14:04:11 -- setup/hugepages.sh@146 -- # setup output 00:03:09.617 14:04:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.617 14:04:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:10.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:10.194 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.194 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.194 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.194 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:10.194 14:04:11 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:10.194 14:04:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:10.194 14:04:11 -- setup/hugepages.sh@89 -- # local node 00:03:10.194 14:04:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.194 14:04:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.194 14:04:11 -- setup/hugepages.sh@92 -- # local surp 00:03:10.194 14:04:11 -- setup/hugepages.sh@93 -- # local resv 00:03:10.194 14:04:11 -- setup/hugepages.sh@94 -- # local anon 00:03:10.194 14:04:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.194 14:04:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.194 14:04:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.194 14:04:11 -- 
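per_node_1G_alloc asks for one gigabyte of hugepage memory pinned to a single node: get_test_nr_hugepages 1048576 0 (size in kB, then the node id) divides the request by the 2048 kB default page size from the meminfo snapshots, yielding the nr_hugepages=512 and NRHUGE=512 HUGENODE=0 seen in the trace before setup.sh re-runs. The 0000:00:0x.0 messages are setup.sh declining to rebind devices that back mounted filesystems or are already on uio_pci_generic. The arithmetic, spelled out (the division itself is inferred from size=1048576 and the resulting 512; the script's exact expression is not shown):

    size_kb=1048576            # requested: 1 GiB, expressed in kB
    hugepage_kb=2048           # Hugepagesize from the snapshots above
    nr_hugepages=$(( size_kb / hugepage_kb ))    # = 512
    NRHUGE=$nr_hugepages HUGENODE=0 \
        /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # as traced at common.sh@10

The snapshot printed just below confirms the result: HugePages_Total: 512 and Hugetlb: 1048576 kB.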
setup/common.sh@18 -- # local node= 00:03:10.194 14:04:11 -- setup/common.sh@19 -- # local var val 00:03:10.194 14:04:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.194 14:04:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.194 14:04:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.194 14:04:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.194 14:04:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.194 14:04:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.194 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8932936 kB' 'MemAvailable: 10488416 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466748 kB' 'Inactive: 1421764 kB' 'Active(anon): 127524 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118600 kB' 'Mapped: 50820 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161932 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98648 kB' 'KernelStack: 6456 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 
-- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.195 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.195 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.196 14:04:11 -- setup/common.sh@33 -- # echo 0 00:03:10.196 14:04:11 -- setup/common.sh@33 -- # return 0 00:03:10.196 14:04:11 -- setup/hugepages.sh@97 -- # anon=0 00:03:10.196 14:04:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.196 14:04:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.196 14:04:11 -- setup/common.sh@18 -- # local node= 00:03:10.196 14:04:11 -- setup/common.sh@19 -- # local var val 00:03:10.196 14:04:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.196 14:04:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.196 14:04:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.196 14:04:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.196 14:04:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.196 14:04:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- 
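AnonHugePages comes back 0 kB, so anon=0 here and the trace moves on to HugePages_Surp for the 512-page configuration. The gate that triggered this lookup is the @96 test near the start of verify_nr_hugepages, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the bracketed word is the active transparent-hugepage mode, and anonymous hugepages are only counted when THP is not pinned to never. Paraphrased below; the sysfs path is an assumption, since the trace shows only the already-expanded string:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 (kB) in this run
    else
        anon=0
    fi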
setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8932936 kB' 'MemAvailable: 10488416 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466776 kB' 'Inactive: 1421764 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118652 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161952 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98668 kB' 'KernelStack: 6496 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # 
continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.196 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.196 14:04:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.197 14:04:11 -- setup/common.sh@33 -- # echo 0 00:03:10.197 14:04:11 -- setup/common.sh@33 -- # return 0 00:03:10.197 14:04:11 -- setup/hugepages.sh@99 -- # surp=0 00:03:10.197 14:04:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.197 14:04:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.197 14:04:11 -- setup/common.sh@18 -- # local node= 00:03:10.197 14:04:11 -- setup/common.sh@19 -- # local var val 00:03:10.197 14:04:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.197 14:04:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.197 14:04:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.197 14:04:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.197 14:04:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.197 14:04:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8932936 kB' 'MemAvailable: 10488416 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466756 kB' 'Inactive: 1421764 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118656 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161944 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98660 kB' 'KernelStack: 6496 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.197 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.197 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 
-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.198 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.198 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.198 14:04:11 -- setup/common.sh@33 -- # echo 0 00:03:10.199 14:04:11 -- setup/common.sh@33 -- # return 0 00:03:10.199 nr_hugepages=512 00:03:10.199 14:04:11 -- setup/hugepages.sh@100 -- # resv=0 00:03:10.199 14:04:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:10.199 resv_hugepages=0 00:03:10.199 surplus_hugepages=0 00:03:10.199 anon_hugepages=0 00:03:10.199 14:04:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.199 14:04:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.199 14:04:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.199 14:04:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:10.199 14:04:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:10.199 14:04:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.199 14:04:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.199 14:04:11 -- setup/common.sh@18 -- # local node= 00:03:10.199 14:04:11 -- setup/common.sh@19 -- # local var val 00:03:10.199 14:04:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.199 14:04:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.199 14:04:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.199 14:04:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.199 14:04:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.199 14:04:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8933196 kB' 'MemAvailable: 10488676 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466832 kB' 'Inactive: 1421764 kB' 'Active(anon): 127608 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118708 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161944 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98660 kB' 'KernelStack: 6512 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 
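The long runs of "[[ key == ... ]]" / "continue" entries above and below are bash xtrace from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file), strips any leading "Node N " prefix, then walks the fields one read at a time until it reaches the requested key, so every non-matching key leaves one "continue" line in the trace. A minimal reconstruction of that pattern, simplified from the trace rather than the verbatim SPDK helper:

  #!/usr/bin/env bash
  shopt -s extglob                        # needed for the +([0-9]) strip pattern
  # get_meminfo KEY [NODE] - print KEY's value from the (per-node) meminfo file.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _
      # A node argument switches to that node's own meminfo file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          # One xtrace "continue" entry per key that is not the one requested.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

In this run, get_meminfo HugePages_Rsvd answers 0 from /proc/meminfo, while the later get_meminfo HugePages_Surp 0 call reads node0's file under /sys/devices/system/node/node0/meminfo instead.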
00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 
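With anon, surplus, and reserved pages all reading 0, hugepages.sh then checks the pool's bookkeeping: the kernel-reported HugePages_Total has to equal the 512 pages the test configured, which is the "(( 512 == nr_hugepages + surp + resv ))" step in the trace. A hedged sketch of that consistency check, reusing the get_meminfo sketch above (variable names follow the trace; the standalone framing is illustrative):

  nr_hugepages=512                          # pages the test asked for
  anon=$(get_meminfo AnonHugePages)         # transparent hugepages, expected idle
  surp=$(get_meminfo HugePages_Surp)        # surplus pages beyond the static pool
  resv=$(get_meminfo HugePages_Rsvd)        # reserved but not yet faulted pages
  total=$(get_meminfo HugePages_Total)

  # The static pool is consistent when the kernel total matches the
  # requested count plus any surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages"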
00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.199 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.199 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 
-- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.200 14:04:11 -- setup/common.sh@33 -- # echo 512 00:03:10.200 14:04:11 -- setup/common.sh@33 -- # return 0 00:03:10.200 14:04:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:10.200 14:04:11 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.200 14:04:11 -- setup/hugepages.sh@27 -- # local node 00:03:10.200 14:04:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.200 14:04:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.200 14:04:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:10.200 14:04:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.200 14:04:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.200 14:04:11 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:10.200 14:04:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.200 14:04:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.200 14:04:11 -- setup/common.sh@18 -- # local node=0 00:03:10.200 14:04:11 -- setup/common.sh@19 -- # local var val 00:03:10.200 14:04:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.200 14:04:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.200 14:04:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.200 14:04:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.200 14:04:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.200 14:04:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8933392 kB' 'MemUsed: 3303680 kB' 'SwapCached: 0 kB' 'Active: 466756 kB' 'Inactive: 1421764 kB' 'Active(anon): 127532 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771480 kB' 'Mapped: 50692 kB' 'AnonPages: 118636 kB' 'Shmem: 10492 kB' 'KernelStack: 6480 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63284 kB' 'Slab: 161940 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 
14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.200 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.200 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.201 14:04:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.201 
14:04:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.201 14:04:11 -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-@32 repeat the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for every remaining /proc/meminfo field through HugePages_Free]
00:03:10.201 14:04:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.201 14:04:11 -- setup/common.sh@33 -- # echo 0
00:03:10.201 14:04:11 -- setup/common.sh@33 -- # return 0
00:03:10.201 node0=512 expecting 512
************************************
00:03:10.201 END TEST per_node_1G_alloc
00:03:10.201 ************************************
00:03:10.201 14:04:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.201 14:04:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.201 14:04:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.201 14:04:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.201 14:04:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:10.201 14:04:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:10.201
00:03:10.201 real 0m0.608s
00:03:10.201 user 0m0.258s
00:03:10.201 sys 0m0.347s
00:03:10.201 14:04:11 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:10.201 14:04:11 -- common/autotest_common.sh@10 -- # set +x
00:03:10.463 14:04:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:10.463 14:04:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:10.463 14:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:10.463 14:04:11 -- common/autotest_common.sh@10 -- # set +x
00:03:10.463 ************************************
00:03:10.463 START TEST even_2G_alloc
00:03:10.463 ************************************
00:03:10.463 14:04:11 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:03:10.463 14:04:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:10.463 14:04:11 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:10.463 14:04:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:10.463 14:04:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:10.463 14:04:11 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:10.463 14:04:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.463 14:04:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:10.463 14:04:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:10.463 14:04:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.463 14:04:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.463 14:04:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:03:10.463 14:04:11 -- setup/hugepages.sh@83 -- # : 0
00:03:10.463 14:04:11 -- setup/hugepages.sh@84 -- # : 0
00:03:10.463 14:04:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.463 14:04:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:10.463 14:04:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:10.463 14:04:11 -- setup/hugepages.sh@153 -- # setup output
00:03:10.463 14:04:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.463 14:04:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:10.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:10.727 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:10.727 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:10.727 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:10.727 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
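The nodes_test bookkeeping above is even_2G_alloc asking for its 1024-page pool to be spread evenly across NUMA nodes (HUGE_EVEN_ALLOC=yes) before setup.sh re-runs; this VM exposes a single node, so node 0 takes the whole pool. A minimal sketch of that split, assuming a plain even integer division per node (illustrative only, not the verbatim hugepages.sh logic):

  # Even per-node split of the test pool, as HUGE_EVEN_ALLOC=yes requests.
  # On this single-node VM the whole 1024-page pool lands on node 0.
  nr_hugepages=1024
  no_nodes=1                              # NUMA node count on the test VM
  declare -a nodes_test
  per_node=$((nr_hugepages / no_nodes))   # 1024 / 1 = 1024
  for ((node = 0; node < no_nodes; node++)); do
      nodes_test[node]=$per_node
  done
  echo "node0=${nodes_test[0]}"           # -> node0=1024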
00:03:10.727 14:04:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:10.727 14:04:12 -- setup/hugepages.sh@89 -- # local node
00:03:10.727 14:04:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.727 14:04:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.727 14:04:12 -- setup/hugepages.sh@92 -- # local surp
00:03:10.727 14:04:12 -- setup/hugepages.sh@93 -- # local resv
00:03:10.727 14:04:12 -- setup/hugepages.sh@94 -- # local anon
00:03:10.727 14:04:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.727 14:04:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.727 14:04:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.727 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:10.727 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:10.727 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.727 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.727 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.727 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.727 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.727 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.727 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:10.727 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:10.727 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7894816 kB' 'MemAvailable: 9450296 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 467008 kB' 'Inactive: 1421764 kB' 'Active(anon): 127784 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118864 kB' 'Mapped: 50880 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 162004 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98720 kB' 'KernelStack: 6536 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@31-@32 then test each field of that snapshot against AnonHugePages, continuing past every non-match]
00:03:10.728 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.728 14:04:12 -- setup/common.sh@33 -- # echo 0
00:03:10.728 14:04:12 -- setup/common.sh@33 -- # return 0
00:03:10.728 14:04:12 -- setup/hugepages.sh@97 -- # anon=0
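Each of these get_meminfo calls walks the whole of /proc/meminfo field by field, which is what produces the long IFS=': ' / read / continue trace runs. A compressed standalone equivalent of the pattern (an illustrative sketch, not the verbatim setup/common.sh source, which snapshots the file with mapfile first):

  # Split each /proc/meminfo line on ': ', skip fields that don't match,
  # and echo the value of the requested field (per-node files work the same).
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val" && return 0
      done <"$mem_f"
      return 1
  }
  get_meminfo AnonHugePages   # -> 0 here, hence "anon=0" in the trace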
00:03:10.728 14:04:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.728 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.728 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:10.728 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:10.728 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.728 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.728 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.728 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.728 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.728 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.728 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:10.728 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:10.728 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7895336 kB' 'MemAvailable: 9450816 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 467080 kB' 'Inactive: 1421764 kB' 'Active(anon): 127856 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118912 kB' 'Mapped: 50824 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 162012 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98728 kB' 'KernelStack: 6512 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-field scan of that snapshot against HugePages_Surp, continuing past every field ahead of it]
00:03:10.993 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.993 14:04:12 -- setup/common.sh@33 -- # echo 0
00:03:10.993 14:04:12 -- setup/common.sh@33 -- # return 0
00:03:10.993 14:04:12 -- setup/hugepages.sh@99 -- # surp=0
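The 1024 pages being accounted for here trace back to get_test_nr_hugepages 2097152 at the top of even_2G_alloc. Assuming the size argument is in kB, which is consistent with the numbers in the trace, the sizing arithmetic is simply:

  # "even 2G alloc": 2 GiB expressed in kB, divided by the 2048 kB
  # Hugepagesize reported in the meminfo snapshots above.
  size_kb=2097152
  hugepagesize_kb=2048
  echo $((size_kb / hugepagesize_kb))   # -> 1024, the nr_hugepages target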
00:03:10.993 14:04:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.993 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.993 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:10.993 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:10.993 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.993 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.993 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.993 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.993 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.993 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.993 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:10.993 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:10.993 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7895192 kB' 'MemAvailable: 9450672 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466876 kB' 'Inactive: 1421764 kB' 'Active(anon): 127652 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 162012 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98728 kB' 'KernelStack: 6480 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-field scan of that snapshot against HugePages_Rsvd, continuing past every field ahead of it]
00:03:10.995 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.995 14:04:12 -- setup/common.sh@33 -- # echo 0
00:03:10.995 14:04:12 -- setup/common.sh@33 -- # return 0
00:03:10.995 nr_hugepages=1024
00:03:10.995 14:04:12 -- setup/hugepages.sh@100 -- # resv=0
00:03:10.995 14:04:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:03:10.995 14:04:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0 anon_hugepages=0
00:03:10.995 14:04:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.995 14:04:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.995 14:04:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.995 14:04:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
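At this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0 against the requested pool of 1024 pages, and the two (( ... )) checks assert that the kernel's hugepage counters add up. Roughly (an illustrative recreation, not the verbatim hugepages.sh source):

  # The pool reported by the kernel must equal the requested pages plus
  # any surplus and reserved pages; here 1024 == 1024 + 0 + 0.
  nr_hugepages=1024   # requested via NRHUGE
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage pool does not add up'
  (( 1024 == nr_hugepages )) && echo 'nr_hugepages=1024 verified'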
00:03:10.995 14:04:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.995 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.995 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:10.995 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:10.995 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:10.995 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.995 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.995 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.995 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.995 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.995 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:10.995 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:10.995 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7895552 kB' 'MemAvailable: 9451032 kB' 'Buffers: 2684 kB' 'Cached: 1768796 kB' 'SwapCached: 0 kB' 'Active: 466844 kB' 'Inactive: 1421764 kB' 'Active(anon): 127620 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118672 kB' 'Mapped: 50744 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 162012 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98728 kB' 'KernelStack: 6464 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-field scan of that snapshot against HugePages_Total; the captured log breaks off mid-scan at FileHugePages]
00:03:10.996 14:04:12 --
setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.996 14:04:12 -- setup/common.sh@33 -- # echo 1024 00:03:10.996 14:04:12 -- setup/common.sh@33 -- # return 0 00:03:10.996 14:04:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.996 14:04:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.996 14:04:12 -- setup/hugepages.sh@27 -- # local node 00:03:10.996 14:04:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.996 14:04:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.996 14:04:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:10.996 14:04:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.996 14:04:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.996 14:04:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.996 14:04:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.996 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.996 14:04:12 -- setup/common.sh@18 -- # local node=0 00:03:10.996 14:04:12 -- setup/common.sh@19 -- # local var val 00:03:10.996 14:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:10.996 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.996 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.996 14:04:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.996 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.996 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7895552 kB' 'MemUsed: 4341520 kB' 'SwapCached: 0 kB' 'Active: 466792 kB' 'Inactive: 1421764 kB' 'Active(anon): 127568 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771480 kB' 'Mapped: 50692 kB' 'AnonPages: 118656 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63284 kB' 'Slab: 162008 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.996 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.996 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # continue 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:10.997 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:10.997 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.997 
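[editor's note] The condensed key scans above and below all come from the same helper. Reconstructed from the xtrace, setup/common.sh's get_meminfo amounts to the following minimal sketch; the names and expressions are taken from the trace, but the actual setup/common.sh source may differ in detail:

# Sketch of get_meminfo as traced: get_meminfo <key> [node] prints the value
# of <key> from /proc/meminfo, or from the per-node meminfo file if a node is given.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem line
    mem_f=/proc/meminfo
    # With a node argument, read that node's own meminfo instead (trace @23-@24).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip it
    # (the +([0-9]) pattern requires shopt -s extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long key-by-key scan in the trace
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done
    return 1
}

So get_meminfo HugePages_Total echoes 1024 and get_meminfo HugePages_Surp 0 echoes 0, matching the echo/return pairs in the trace.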
00:03:10.997 node0=1024 expecting 1024
00:03:10.997 14:04:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.997 14:04:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.997 14:04:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.997 14:04:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.997 14:04:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:10.998 14:04:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:10.998
00:03:10.998 real    0m0.609s
00:03:10.998 user    0m0.264s
00:03:10.998 sys     0m0.349s
00:03:10.998 ************************************
00:03:10.998 END TEST even_2G_alloc
00:03:10.998 ************************************
00:03:10.998 14:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:10.998 14:04:12 -- common/autotest_common.sh@10 -- # set +x
00:03:10.998 14:04:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:10.998 14:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:10.998 14:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:10.998 14:04:12 -- common/autotest_common.sh@10 -- # set +x
00:03:10.998 ************************************
00:03:10.998 START TEST odd_alloc
00:03:10.998 ************************************
00:03:10.998 14:04:12 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:10.998 14:04:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:10.998 14:04:12 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:10.998 14:04:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:10.998 14:04:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:10.998 14:04:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:10.998 14:04:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:10.998 14:04:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:10.998 14:04:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:10.998 14:04:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:10.998 14:04:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:10.998 14:04:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:10.998 14:04:12 -- setup/hugepages.sh@83 -- # : 0
00:03:10.998 14:04:12 -- setup/hugepages.sh@84 -- # : 0
00:03:10.998 14:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:10.998 14:04:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:10.998 14:04:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:10.998 14:04:12 -- setup/hugepages.sh@160 -- # setup output
00:03:10.998 14:04:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.998 14:04:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:11.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:11.575 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:11.575 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:11.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:11.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
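[editor's note] The odd_alloc sizing above follows from simple arithmetic. A minimal sketch, assuming get_test_nr_hugepages rounds the requested size up to whole pages; the trace only shows size=2098176 going in and nr_hugepages=1025 coming out, so the rounding expression below is an assumption:

# HUGEMEM=2049 (apparently MB: 2049 * 1024 = 2098176 kB) is the odd_alloc request.
# With the default 2048 kB hugepage ('Hugepagesize: 2048 kB' in the dumps above),
# 2098176 / 2048 = 1024.5, so the test needs 1025 pages -- an odd count by design.
size=2098176            # kB, from get_test_nr_hugepages 2098176
default_hugepages=2048  # kB
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))  # assumed ceiling
echo "$nr_hugepages"    # 1025, matching nr_hugepages=1025 in the trace

scripts/setup.sh then applies that allocation and rebinds test devices; as the output above shows, devices backing mounted filesystems (vda) are skipped, and the qemu test controllers already sit on uio_pci_generic.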
00:03:11.575 14:04:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:11.575 14:04:12 -- setup/hugepages.sh@89 -- # local node
00:03:11.575 14:04:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.575 14:04:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.575 14:04:12 -- setup/hugepages.sh@92 -- # local surp
00:03:11.575 14:04:12 -- setup/hugepages.sh@93 -- # local resv
00:03:11.575 14:04:12 -- setup/hugepages.sh@94 -- # local anon
00:03:11.575 14:04:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.575 14:04:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.575 14:04:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.575 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:11.575 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:11.575 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.575 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.575 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.575 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.575 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.575 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.575 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:11.575 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:11.575 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7904784 kB' 'MemAvailable: 9460268 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 467348 kB' 'Inactive: 1421768 kB' 'Active(anon): 128124 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119204 kB' 'Mapped: 50844 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161888 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98604 kB' 'KernelStack: 6556 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457540 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:11.575 [xtrace condensed: setup/common.sh@32 tests every meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and continues past each non-match]
00:03:11.577 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.577 14:04:12 -- setup/common.sh@33 -- # echo 0
00:03:11.577 14:04:12 -- setup/common.sh@33 -- # return 0
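[editor's note] The gate at setup/hugepages.sh@96 above is already expanded in the trace: the bracketed word in "always [madvise] never" is the active transparent-hugepage mode, so AnonHugePages is only consulted when THP is not pinned to never. A sketch of that logic; the sysfs path is the standard kernel location and is an assumption here, since the trace shows only the expanded test:

# On this VM the mode is [madvise], so the glob does not match *"[never]"*
# and the branch reads AnonHugePages (0 kB here), giving anon=0 below.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)
else
    anon=0
fi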
00:03:11.577 14:04:12 -- setup/hugepages.sh@97 -- # anon=0
00:03:11.577 14:04:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:11.577 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.577 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:11.577 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:11.577 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.577 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.577 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.577 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.577 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.577 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.577 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:11.577 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:11.577 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7904784 kB' 'MemAvailable: 9460268 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466948 kB' 'Inactive: 1421768 kB' 'Active(anon): 127724 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50792 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161868 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98584 kB' 'KernelStack: 6480 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457540 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:11.577 [xtrace condensed: setup/common.sh@32 tests every meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and continues past each non-match]
00:03:11.579 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.579 14:04:12 -- setup/common.sh@33 -- # echo 0
00:03:11.579 14:04:12 -- setup/common.sh@33 -- # return 0
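[editor's note] With anon and surp collected, verify_nr_hugepages still needs the reserved-page count before it can repeat the accounting seen earlier at setup/hugepages.sh@110. A sketch of the whole check, pieced together from the traced expressions; the control flow around them is my assumption:

# From the trace: the kernel's HugePages_Total must equal the requested page
# count plus any surplus and reserved pages (hugepages.sh@110 showed
# (( 1024 == nr_hugepages + surp + resv )) during even_2G_alloc).
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # looked up next in the trace; 0 here
total=$(get_meminfo HugePages_Total)  # 1025 after the odd_alloc setup
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
# Per-node counts are then compared the same way and reported as
# "node<N>=<got> expecting <want>", e.g. the "node0=1024 expecting 1024" above.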
00:03:11.579 14:04:12 -- setup/hugepages.sh@99 -- # surp=0
00:03:11.579 14:04:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.579 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.579 14:04:12 -- setup/common.sh@18 -- # local node=
00:03:11.579 14:04:12 -- setup/common.sh@19 -- # local var val
00:03:11.579 14:04:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.579 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.579 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.579 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.579 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.579 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.579 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:11.579 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:11.579 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7904532 kB' 'MemAvailable: 9460016 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466600 kB' 'Inactive: 1421768 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161868 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98584 kB' 'KernelStack: 6512 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457540 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:11.579 [xtrace condensed: setup/common.sh@32 has so far tested MemTotal through AnonPages against HugePages_Rsvd, continuing past each non-match; the scan resumes below]
00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': '
00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _
00:03:11.580 14:04:12
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.580 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.580 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.580 14:04:12 -- setup/common.sh@33 -- # echo 0 00:03:11.580 14:04:12 -- setup/common.sh@33 -- # return 0 00:03:11.580 14:04:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:11.580 nr_hugepages=1025 00:03:11.580 resv_hugepages=0 00:03:11.580 surplus_hugepages=0 00:03:11.580 anon_hugepages=0 00:03:11.581 14:04:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:11.581 14:04:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.581 14:04:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.581 14:04:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.581 14:04:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.581 14:04:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:11.581 14:04:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.581 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.581 14:04:12 -- setup/common.sh@18 -- # local node= 00:03:11.581 14:04:12 -- setup/common.sh@19 -- # local var val 00:03:11.581 14:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.581 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.581 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.581 14:04:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.581 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.581 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7904532 kB' 'MemAvailable: 9460016 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466804 kB' 'Inactive: 1421768 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118668 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161868 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98584 kB' 'KernelStack: 6496 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457540 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.581 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.581 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 
00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.582 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.582 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.583 14:04:12 -- setup/common.sh@33 -- # echo 1025 00:03:11.583 14:04:12 -- setup/common.sh@33 -- # return 0 00:03:11.583 14:04:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.583 14:04:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.583 14:04:12 -- setup/hugepages.sh@27 -- # local node 00:03:11.583 14:04:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.583 14:04:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:11.583 14:04:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:11.583 14:04:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.583 14:04:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.583 14:04:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.583 14:04:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.583 14:04:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.583 14:04:12 -- setup/common.sh@18 -- # local node=0 00:03:11.583 14:04:12 -- setup/common.sh@19 -- # local var val 00:03:11.583 14:04:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:11.583 14:04:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.583 14:04:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.583 14:04:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.583 14:04:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.583 14:04:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7904904 kB' 'MemUsed: 4332168 kB' 'SwapCached: 0 kB' 'Active: 466804 kB' 'Inactive: 1421768 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771484 kB' 'Mapped: 50692 kB' 'AnonPages: 118664 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63284 kB' 'Slab: 161868 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 
14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 
14:04:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.583 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.583 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # continue 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.584 14:04:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.584 14:04:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.584 14:04:12 -- setup/common.sh@33 -- # echo 0 00:03:11.584 14:04:12 -- setup/common.sh@33 -- # return 0 00:03:11.584 node0=1025 expecting 1025 00:03:11.584 ************************************ 00:03:11.584 END TEST odd_alloc 00:03:11.584 ************************************ 00:03:11.584 14:04:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.584 14:04:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.584 14:04:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.584 14:04:12 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:11.584 14:04:12 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:11.584 00:03:11.584 real 0m0.587s 00:03:11.584 user 0m0.242s 00:03:11.584 sys 0m0.353s 00:03:11.584 14:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:11.584 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:03:11.584 14:04:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:11.584 14:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.584 14:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.584 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:03:11.584 
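(Note on the trace above: the long runs of "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" and "continue" entries are the get_meminfo helper from setup/common.sh running under "set -x"; the backslash-escaped right-hand side is simply how xtrace prints a quoted pattern. The helper snapshots /proc/meminfo, or a node's own meminfo file for per-node queries, then scans it line by line until the requested key matches and prints that key's value. Below is a minimal sketch reconstructed from this trace, not the verbatim SPDK source; the "return 1" fallthrough for a key that never matches is an assumption.)

    shopt -s extglob

    # Sketch of setup/common.sh get_meminfo, reconstructed from the xtrace.
    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node queries (e.g. "get_meminfo HugePages_Surp 0") read the
        # node's own meminfo file instead of the system-wide one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node 0 " prefix; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")

        # Each IFS=': ' / read / [[ ... ]] / continue run in the trace is one
        # iteration of this loop, one iteration per meminfo line.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. 1025 for HugePages_Total, 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1    # assumption: nonzero when the key is never found
    }

In the odd_alloc verification above, get_meminfo HugePages_Total returns 1025 and get_meminfo HugePages_Surp 0 returns 0 for node 0, which is why the test echoes "node0=1025 expecting 1025" and its final [[ 1025 == 1025 ]] check succeeds.)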
************************************ 00:03:11.584 START TEST custom_alloc 00:03:11.584 ************************************ 00:03:11.584 14:04:12 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:11.584 14:04:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:11.584 14:04:12 -- setup/hugepages.sh@169 -- # local node 00:03:11.584 14:04:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:11.584 14:04:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:11.584 14:04:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:11.584 14:04:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:11.584 14:04:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:11.584 14:04:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:11.584 14:04:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:11.584 14:04:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.584 14:04:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.584 14:04:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.584 14:04:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:11.584 14:04:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.584 14:04:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.584 14:04:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:11.584 14:04:12 -- setup/hugepages.sh@83 -- # : 0 00:03:11.584 14:04:12 -- setup/hugepages.sh@84 -- # : 0 00:03:11.584 14:04:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:11.584 14:04:12 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:11.584 14:04:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:11.584 14:04:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:11.584 14:04:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:11.584 14:04:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:11.584 14:04:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:11.584 14:04:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:11.584 14:04:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:11.584 14:04:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:11.584 14:04:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:11.584 14:04:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:11.584 14:04:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:11.584 14:04:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:11.585 14:04:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:11.585 14:04:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:11.585 14:04:13 -- setup/hugepages.sh@78 -- # return 0 00:03:11.585 14:04:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:11.585 14:04:13 -- setup/hugepages.sh@187 -- # setup output 00:03:11.585 14:04:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.585 14:04:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:12.164 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:12.164 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.164 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.164 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.164 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:12.164 14:04:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:12.164 14:04:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:12.164 14:04:13 -- setup/hugepages.sh@89 -- # local node 00:03:12.164 14:04:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.164 14:04:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.164 14:04:13 -- setup/hugepages.sh@92 -- # local surp 00:03:12.164 14:04:13 -- setup/hugepages.sh@93 -- # local resv 00:03:12.164 14:04:13 -- setup/hugepages.sh@94 -- # local anon 00:03:12.164 14:04:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.164 14:04:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.164 14:04:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.164 14:04:13 -- setup/common.sh@18 -- # local node= 00:03:12.164 14:04:13 -- setup/common.sh@19 -- # local var val 00:03:12.164 14:04:13 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.164 14:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.164 14:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.164 14:04:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.164 14:04:13 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.164 14:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8957760 kB' 'MemAvailable: 10513244 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466944 kB' 'Inactive: 1421768 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118832 kB' 'Mapped: 50932 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161836 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98552 kB' 'KernelStack: 6540 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.164 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.164 14:04:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.165 14:04:13 -- setup/common.sh@32 -- # continue 00:03:12.165 14:04:13 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.165 14:04:13 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.165 14:04:13 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.165 14:04:13 -- setup/common.sh@32 -- # continue
00:03:12.165 14:04:13 -- [per-key scan elided: SwapFree through HardwareCorrupted all fail the AnonHugePages match and continue]
00:03:12.165 14:04:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.165 14:04:13 -- setup/common.sh@33 -- # echo 0
00:03:12.165 14:04:13 -- setup/common.sh@33 -- # return 0
00:03:12.166 14:04:13 -- setup/hugepages.sh@97 -- # anon=0
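The wall of [[ key == pattern ]]/continue pairs in this section is one small helper at work: get_meminfo reads a meminfo file into an array and walks it key by key until the requested field matches, echoing only the numeric value. A minimal sketch of that helper, reconstructed from this xtrace (the function and variable names are the ones the traced setup/common.sh uses; the real source may differ in detail):

#!/usr/bin/env bash
# Sketch of setup/common.sh's get_meminfo as reconstructed from the xtrace
# above; illustrative, not the verbatim SPDK source.
shopt -s extglob   # for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo mem
	# A node id argument redirects the query to that node's own meminfo,
	# as happens for the "HugePages_Surp 0" call later in this log.
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node N "
	# Scan key by key; every non-matching key takes the continue branch,
	# which is exactly the repeated [[ ... ]]/continue pairs traced above.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"   # numeric field only; the "kB" unit lands in $_
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo AnonHugePages   # prints 0 on the run captured here

The same helper serves every query that follows; only the key argument changes, which is why the trace repeats near-identically for HugePages_Surp, HugePages_Rsvd, and HugePages_Total below.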
00:03:12.166 14:04:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.166 14:04:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.166 14:04:13 -- setup/common.sh@18 -- # local node=
00:03:12.166 14:04:13 -- setup/common.sh@19 -- # local var val
00:03:12.166 14:04:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.166 14:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.166 14:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.166 14:04:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.166 14:04:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.166 14:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.166 14:04:13 -- setup/common.sh@31 -- # IFS=': '
00:03:12.166 14:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8957764 kB' 'MemAvailable: 10513248 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 467188 kB' 'Inactive: 1421768 kB' 'Active(anon): 127964 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119052 kB' 'Mapped: 50780 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161832 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98548 kB' 'KernelStack: 6508 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:12.166 14:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:12.167 14:04:13 -- [per-key scan elided: MemTotal through HugePages_Rsvd all fail the HugePages_Surp match and continue]
00:03:12.168 14:04:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.168 14:04:13 -- setup/common.sh@33 -- # echo 0
00:03:12.168 14:04:13 -- setup/common.sh@33 -- # return 0
00:03:12.168 14:04:13 -- setup/hugepages.sh@99 -- # surp=0
00:03:12.168 14:04:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.168 14:04:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.168 14:04:13 -- setup/common.sh@18 -- # local node=
00:03:12.168 14:04:13 -- setup/common.sh@19 -- # local var val
00:03:12.168 14:04:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.168 14:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.168 14:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.168 14:04:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.168 14:04:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.168 14:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.168 14:04:13 -- setup/common.sh@31 -- # IFS=': '
00:03:12.168 14:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:12.168 14:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8957764 kB' 'MemAvailable: 10513248 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466680 kB' 'Inactive: 1421768 kB' 'Active(anon): 127456 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118556 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161844 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98560 kB' 'KernelStack: 6480 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:12.169 14:04:13 -- [per-key scan elided: MemTotal through HugePages_Free all fail the HugePages_Rsvd match and continue]
00:03:12.170 14:04:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.170 14:04:13 -- setup/common.sh@33 -- # echo 0
00:03:12.170 14:04:13 -- setup/common.sh@33 -- # return 0
00:03:12.170 nr_hugepages=512
00:03:12.170 resv_hugepages=0
00:03:12.170 surplus_hugepages=0
00:03:12.170 anon_hugepages=0
00:03:12.170 14:04:13 -- setup/hugepages.sh@100 -- # resv=0
00:03:12.170 14:04:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:12.170 14:04:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.170 14:04:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.170 14:04:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.170 14:04:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:12.170 14:04:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
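At this point verify_nr_hugepages has all four counters: the anon, surp, and resv values just read, plus the nr_hugepages the test configured. The bookkeeping it enforces condenses to the sketch below, which reuses the get_meminfo sketch from earlier; the variable names come from the trace, while the guard structure and failure handling are assumptions:

# Condensed sketch of the accounting traced at setup/hugepages.sh@97-110.
nr_hugepages=512                     # pages requested by the custom_alloc test
anon=$(get_meminfo AnonHugePages)    # 0 in this run
surp=$(get_meminfo HugePages_Surp)   # 0
resv=$(get_meminfo HugePages_Rsvd)   # 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool is consistent when the kernel's HugePages_Total equals the
# requested pages plus surplus and reserved ones: 512 == 512 + 0 + 0 here.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

That HugePages_Total query is exactly what the trace performs next.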
00:03:12.170 14:04:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.170 14:04:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.170 14:04:13 -- setup/common.sh@18 -- # local node=
00:03:12.170 14:04:13 -- setup/common.sh@19 -- # local var val
00:03:12.170 14:04:13 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.170 14:04:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.170 14:04:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.170 14:04:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.170 14:04:13 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.170 14:04:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.170 14:04:13 -- setup/common.sh@31 -- # IFS=': '
00:03:12.170 14:04:13 -- setup/common.sh@31 -- # read -r var val _
00:03:12.170 14:04:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 8957764 kB' 'MemAvailable: 10513248 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 1421768 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118816 kB' 'Mapped: 50692 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161844 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98560 kB' 'KernelStack: 6480 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982852 kB' 'Committed_AS: 322804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:12.171 14:04:13 -- [per-key scan elided: MemTotal through Unaccepted all fail the HugePages_Total match and continue]
00:03:12.172 14:04:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.172 14:04:13 -- setup/common.sh@33 -- # echo 512
00:03:12.172 14:04:13 -- setup/common.sh@33 -- # return 0
00:03:12.172 14:04:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:12.172 14:04:13 -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.172 14:04:13 -- setup/hugepages.sh@27 -- # local node
00:03:12.172 14:04:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.172 14:04:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:12.172 14:04:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:12.172 14:04:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.172 14:04:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.172 14:04:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
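The global totals check out, so the same query is now repeated per NUMA node: get_nodes fills nodes_sys from sysfs, and each node's expected count is adjusted by reserved and surplus pages before the comparison. A sketch of that loop with the array names from the trace; how nodes_sys arrives at 512 is simplified here, since the xtrace only shows the already-expanded assignment, and the loop bodies are reconstructed around the get_meminfo sketch above:

# Sketch of the per-node verification (setup/hugepages.sh@112-130).
shopt -s extglob
declare -a nodes_sys nodes_test
nodes_test[0]=512   # pages this test configured on node 0
resv=0              # global reserved pages, as computed earlier

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Expanded to 512 in the trace; the real script derives the
		# per-node count from sysfs rather than hard-coding it.
		nodes_sys[${node##*node}]=512
	done
}

get_nodes
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))                                  # global reserved pages
	(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) # node-local surplus
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	[[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
done

On this single-node VM the loop runs once and prints the node0=512 expecting 512 line seen just below.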
00:03:12.173 14:04:13 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.173 14:04:13 -- setup/common.sh@32 -- # continue
    (the compare-and-continue pair above repeats for every remaining /proc/meminfo
     field: MemUsed, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free)
00:03:12.174 14:04:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.174 14:04:13 -- setup/common.sh@33 -- # echo 0
00:03:12.174 14:04:13 -- setup/common.sh@33 -- # return 0
node0=512 expecting 512
00:03:12.174 14:04:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.174 14:04:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.174 14:04:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.174 14:04:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.174 14:04:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:12.174 14:04:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:12.174
00:03:12.174 real 0m0.589s
00:03:12.174 user 0m0.266s
00:03:12.174 sys 0m0.329s
************************************
END TEST custom_alloc
************************************
00:03:12.174 14:04:13 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:12.174 14:04:13 -- common/autotest_common.sh@10 -- # set +x
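The compare-and-continue runs above (and again below) are single get_meminfo calls traced with set -x: setup/common.sh reads /proc/meminfo into an array, then walks it field by field with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch of that pattern in bash (a readable paraphrase of the traced flow, not the verbatim SPDK helper; per-node reads via /sys/devices/system/node/node<N>/meminfo, visible at common.sh@23-29 below, are omitted):

    # Sketch: fetch one numeric field from /proc/meminfo, per the traced pattern.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # With IFS=': ', a line like "HugePages_Surp:      0" splits
            # cleanly into var=HugePages_Surp and val=0.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Surp    # -> 0 in the run above
    get_meminfo Hugepagesize      # -> 2048 (kB)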
00:03:12.438 14:04:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:12.438 14:04:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:12.438 14:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:12.438 14:04:13 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST no_shrink_alloc
************************************
00:03:12.438 14:04:13 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:12.438 14:04:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:12.438 14:04:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:12.438 14:04:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:12.438 14:04:13 -- setup/hugepages.sh@51 -- # shift
00:03:12.438 14:04:13 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:12.438 14:04:13 -- setup/hugepages.sh@52 -- # local node_ids
00:03:12.438 14:04:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.438 14:04:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:12.438 14:04:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:12.438 14:04:13 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:12.438 14:04:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.438 14:04:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:12.438 14:04:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:12.438 14:04:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.438 14:04:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.438 14:04:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:12.438 14:04:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:12.438 14:04:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:12.438 14:04:13 -- setup/hugepages.sh@73 -- # return 0
00:03:12.438 14:04:13 -- setup/hugepages.sh@198 -- # setup output
00:03:12.438 14:04:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.438 14:04:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:12.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:12.701 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:12.701 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:12.701 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:12.701 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
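Before verifying anything, no_shrink_alloc asked get_test_nr_hugepages for a 2097152 kB (2 GiB) pool on node 0, and the trace shows the result: nr_hugepages=1024, i.e. the request divided by the 2048 kB default hugepage size. A minimal sketch of that arithmetic, assuming kB units throughout as this run's numbers imply (a paraphrase of the traced flow, not the exact SPDK function):

    # Sketch: how nr_hugepages=1024 falls out of 'get_test_nr_hugepages 2097152 0'.
    get_test_nr_hugepages() {
        local size=$1; shift                  # requested pool size in kB (2097152 here)
        local node_ids=("$@")                 # optional NUMA node list ('0' here)
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
        echo "nr_hugepages=$nr_hugepages on node(s) ${node_ids[*]:-all}"
    }
    get_test_nr_hugepages 2097152 0    # -> nr_hugepages=1024 on node(s) 0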
00:03:12.701 14:04:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:12.701 14:04:14 -- setup/hugepages.sh@89 -- # local node
00:03:12.701 14:04:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.701 14:04:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.701 14:04:14 -- setup/hugepages.sh@92 -- # local surp
00:03:12.701 14:04:14 -- setup/hugepages.sh@93 -- # local resv
00:03:12.701 14:04:14 -- setup/hugepages.sh@94 -- # local anon
00:03:12.701 14:04:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.701 14:04:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.701 14:04:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.701 14:04:14 -- setup/common.sh@18 -- # local node=
00:03:12.701 14:04:14 -- setup/common.sh@19 -- # local var val
00:03:12.701 14:04:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.701 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.701 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.701 14:04:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.701 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.701 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.701 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:12.701 14:04:14 -- setup/common.sh@31 -- # read -r var val _
00:03:12.701 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915276 kB' 'MemAvailable: 9470760 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466168 kB' 'Inactive: 1421768 kB' 'Active(anon): 126944 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117868 kB' 'Mapped: 50144 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161780 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98496 kB' 'KernelStack: 6528 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
    (per-field scan: [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, repeated
     for each field above until AnonHugePages matches)
00:03:12.702 14:04:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.702 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:12.702 14:04:14 -- setup/common.sh@33 -- # return 0
00:03:12.702 14:04:14 -- setup/hugepages.sh@97 -- # anon=0
00:03:12.702 14:04:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
    (same common.sh@17-31 get_meminfo preamble as above, node unset)
00:03:12.702 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915268 kB' 'MemAvailable: 9470752 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 465864 kB' 'Inactive: 1421768 kB' 'Active(anon): 126640 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117756 kB' 'Mapped: 49996 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161772 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98488 kB' 'KernelStack: 6452 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
    (per-field scan: [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated
     for each field above until HugePages_Surp matches)
00:03:12.969 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.969 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:12.969 14:04:14 -- setup/common.sh@33 -- # return 0
00:03:12.969 14:04:14 -- setup/hugepages.sh@99 -- # surp=0
00:03:12.969 14:04:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
    (same common.sh@17-31 get_meminfo preamble as above, node unset)
00:03:12.970 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915268 kB' 'MemAvailable: 9470752 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 465900 kB' 'Inactive: 1421768 kB' 'Active(anon): 126676 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117856 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161772 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98488 kB' 'KernelStack: 6468 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
    (per-field scan: [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue, repeated
     for each field above until HugePages_Rsvd matches)
00:03:12.971 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.971 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:12.971 14:04:14 -- setup/common.sh@33 -- # return 0
00:03:12.971 14:04:14 -- setup/hugepages.sh@100 -- # resv=0
00:03:12.971 14:04:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:03:12.971 14:04:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.971 14:04:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.971 14:04:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.971 14:04:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.971 14:04:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
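The two arithmetic checks at hugepages.sh@107 and @109 just above are the heart of verify_nr_hugepages: the expected pool (1024 pages here) must equal the requested count plus the surplus and reserved pages scraped a moment earlier, and with surp and resv both 0 it must match the request exactly. A sketch of the identity being asserted, using this run's values (my reading of the traced expansion, not the full SPDK function):

    # Sketch: the consistency check traced at hugepages.sh@107/@109.
    nr_hugepages=1024    # requested earlier by get_test_nr_hugepages
    anon=0               # get_meminfo AnonHugePages  (transparent hugepages)
    surp=0               # get_meminfo HugePages_Surp (pages beyond the static pool)
    resv=0               # get_meminfo HugePages_Rsvd (promised but not yet faulted)
    total=1024           # expected pool size

    (( total == nr_hugepages + surp + resv )) || echo 'FAIL: pool drifted' >&2
    (( total == nr_hugepages )) && echo 'OK: no surplus or reserved pages in play'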
00:03:12.971 14:04:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
    (same common.sh@17-31 get_meminfo preamble as above, node unset)
00:03:12.971 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915520 kB' 'MemAvailable: 9471004 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 465768 kB' 'Inactive: 1421768 kB' 'Active(anon): 126544 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117688 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161768 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98484 kB' 'KernelStack: 6436 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
    (per-field scan: [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue; the
     trace continues past this point through the remaining fields)
00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.972 14:04:14 -- setup/common.sh@33 -- # echo 1024 00:03:12.972 14:04:14 -- setup/common.sh@33 -- # return 0 00:03:12.972 14:04:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.972 14:04:14 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.972 14:04:14 -- setup/hugepages.sh@27 -- # local node 00:03:12.972 14:04:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.972 14:04:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:12.972 14:04:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:12.972 14:04:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.972 14:04:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.972 14:04:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.972 14:04:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:12.972 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.972 14:04:14 -- setup/common.sh@18 -- # local node=0 
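Every get_meminfo call in this trace follows the same shape: mapfile slurps the whole meminfo file (the per-node copies under /sys/devices/system/node carry a "Node N " prefix that gets stripped at common.sh@29), then a read loop splits each "key: value" row and echoes the value once the requested key matches. A minimal stand-alone sketch of that pattern, written here for illustration rather than lifted from setup/common.sh:

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when a node number is given.
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node files prefix every row with "Node N "; strip it first.
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Total      # prints 1024 on the VM traced above
    get_meminfo HugePages_Surp 0     # per-node lookup, as in the next call

Printing the value and returning keeps such a helper usable in command substitutions, which appears to be how the resv=0, surp=0, and anon=0 assignments in the surrounding hugepages.sh trace consume it.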
00:03:12.972 14:04:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.972 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.972 14:04:14 -- setup/common.sh@18 -- # local node=0
00:03:12.972 14:04:14 -- setup/common.sh@19 -- # local var val
00:03:12.972 14:04:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.972 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.972 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.972 14:04:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.972 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.972 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.972 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:12.972 14:04:14 -- setup/common.sh@31 -- # read -r var val _
00:03:12.972 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915520 kB' 'MemUsed: 4321552 kB' 'SwapCached: 0 kB' 'Active: 465572 kB' 'Inactive: 1421768 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771484 kB' 'Mapped: 49872 kB' 'AnonPages: 117508 kB' 'Shmem: 10492 kB' 'KernelStack: 6452 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63284 kB' 'Slab: 161768 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:12.972 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.972 14:04:14 -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue trace repeats for every following node0 meminfo field until the requested key comes up ...]
00:03:12.973 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.974 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:12.974 14:04:14 -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
14:04:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.974 14:04:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.974 14:04:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.974 14:04:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.974 14:04:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:12.974 14:04:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:12.974 14:04:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:12.974 14:04:14 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:12.974 14:04:14 -- setup/hugepages.sh@202 -- # setup output
00:03:12.974 14:04:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.974 14:04:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:13.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:13.236 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.236 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.236 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.236 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:13.501 INFO: Requested 512 hugepages but 1024 already allocated on node0
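The INFO line above is worth a gloss: setup.sh ran with NRHUGE=512 and CLEAR_HUGE=no, and it treats the request as a floor, so the existing 1024-page pool on node0 already satisfies it and nothing is written. A sketch of that decision, assuming (the log does not show scripts/setup.sh itself) that the script compares the node's current count before touching the sysfs knob:

    # Grow-only hugepage request: leave an oversized pool alone unless
    # CLEAR_HUGE=yes asks for an exact reset. The knob below is the standard
    # per-node 2 MiB-hugepage counter; variable names mirror the trace above.
    NRHUGE=${NRHUGE:-512}
    CLEAR_HUGE=${CLEAR_HUGE:-no}
    knob=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    current=$(<"$knob")
    if [[ $CLEAR_HUGE == yes ]] || (( current < NRHUGE )); then
        echo "$NRHUGE" >"$knob"    # needs root; kernel grows/shrinks the pool
    else
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    fi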
14:04:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:13.501 14:04:14 -- setup/hugepages.sh@89 -- # local node
00:03:13.501 14:04:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:13.501 14:04:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:13.501 14:04:14 -- setup/hugepages.sh@92 -- # local surp
00:03:13.501 14:04:14 -- setup/hugepages.sh@93 -- # local resv
00:03:13.501 14:04:14 -- setup/hugepages.sh@94 -- # local anon
00:03:13.501 14:04:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.501 14:04:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.501 14:04:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:13.501 14:04:14 -- setup/common.sh@18 -- # local node=
00:03:13.501 14:04:14 -- setup/common.sh@19 -- # local var val
00:03:13.501 14:04:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.501 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.501 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.501 14:04:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.501 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.501 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.501 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:13.501 14:04:14 -- setup/common.sh@31 -- # read -r var val _
00:03:13.502 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7914692 kB' 'MemAvailable: 9470176 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466264 kB' 'Inactive: 1421768 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 49976 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161744 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98460 kB' 'KernelStack: 6628 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55640 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:13.502 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.502 14:04:14 -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue trace repeats for every following /proc/meminfo field until the requested key comes up ...]
00:03:13.503 14:04:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:13.503 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:13.503 14:04:14 -- setup/common.sh@33 -- # return 0
00:03:13.503 14:04:14 -- setup/hugepages.sh@97 -- # anon=0
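Note the gate at hugepages.sh@96 before AnonHugePages is read: the kernel reports the active transparent-hugepage mode by bracketing one token ("always [madvise] never" here), and the test only bothers with anonymous huge pages when that mode is not [never]. The same check written long-hand, reusing the get_meminfo sketch from above:

    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP can mint anonymous huge pages, so fold them into the accounting.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi
    echo "anon_hugepages=${anon:-0}"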
00:03:13.503 14:04:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.503 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:13.503 14:04:14 -- setup/common.sh@18 -- # local node=
00:03:13.503 14:04:14 -- setup/common.sh@19 -- # local var val
00:03:13.503 14:04:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.503 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.503 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.503 14:04:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.503 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.503 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.503 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:13.503 14:04:14 -- setup/common.sh@31 -- # read -r var val _
00:03:13.503 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7914824 kB' 'MemAvailable: 9470308 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 466152 kB' 'Inactive: 1421768 kB' 'Active(anon): 126928 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117780 kB' 'Mapped: 49976 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161744 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98460 kB' 'KernelStack: 6652 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:13.503 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.503 14:04:14 -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue trace repeats for every following /proc/meminfo field until the requested key comes up ...]
00:03:13.505 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.505 14:04:14 -- setup/common.sh@33 -- # echo 0
00:03:13.505 14:04:14 -- setup/common.sh@33 -- # return 0
00:03:13.505 14:04:14 -- setup/hugepages.sh@99 -- # surp=0
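The hugepage rows in each snapshot are internally consistent, and the arithmetic is quick to verify: Hugetlb should equal HugePages_Total times Hugepagesize, and 1024 * 2048 kB is exactly the 2097152 kB reported. A spot-check of that identity on a live box, assuming a single hugepage size and the helper sketched earlier:

    # Spot-check: hugetlb footprint == page count * page size (one-size pool).
    total=$(get_meminfo HugePages_Total)   # 1024 in the snapshots above
    size_kb=$(get_meminfo Hugepagesize)    # 2048 (kB)
    hugetlb_kb=$(get_meminfo Hugetlb)      # 2097152 (kB)
    if (( total * size_kb == hugetlb_kb )); then
        echo "hugetlb accounting consistent: $total * $size_kb kB = $hugetlb_kb kB"
    fi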
00:03:13.505 14:04:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.505 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:13.505 14:04:14 -- setup/common.sh@18 -- # local node=
00:03:13.505 14:04:14 -- setup/common.sh@19 -- # local var val
00:03:13.505 14:04:14 -- setup/common.sh@20 -- # local mem_f mem
00:03:13.505 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.505 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.505 14:04:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.505 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.505 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.505 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:13.505 14:04:14 -- setup/common.sh@31 -- # read -r var val _
00:03:13.505 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915084 kB' 'MemAvailable: 9470568 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 465968 kB' 'Inactive: 1421768 kB' 'Active(anon): 126744 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117820 kB' 'Mapped: 49872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161768 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98484 kB' 'KernelStack: 6576 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:13.505 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.505 14:04:14 -- setup/common.sh@32 -- # continue
[... the @31/@32 read/compare/continue trace repeats for the following fields; the raw trace resumes below ...]
00:03:13.506 14:04:14 -- setup/common.sh@31 -- # IFS=': '
00:03:13.506 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.506 14:04:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.506 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.506 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.506 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.506 14:04:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.506 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.506 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.506 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.506 14:04:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.507 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.507 14:04:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.507 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.507 14:04:14 -- setup/common.sh@33 -- # echo 0 00:03:13.507 14:04:14 -- setup/common.sh@33 -- # return 0 00:03:13.507 nr_hugepages=1024 00:03:13.507 resv_hugepages=0 00:03:13.507 surplus_hugepages=0 00:03:13.507 anon_hugepages=0 00:03:13.507 14:04:14 -- setup/hugepages.sh@100 -- # resv=0 00:03:13.507 14:04:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.507 14:04:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.507 14:04:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.507 14:04:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.507 14:04:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.507 14:04:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.507 14:04:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.507 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.507 14:04:14 -- setup/common.sh@18 -- # local node= 00:03:13.507 14:04:14 -- setup/common.sh@19 -- # local var val 00:03:13.508 14:04:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.508 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.508 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.508 14:04:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.508 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.508 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.508 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.508 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.508 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915116 kB' 'MemAvailable: 9470600 kB' 'Buffers: 2684 kB' 'Cached: 1768800 kB' 'SwapCached: 0 kB' 'Active: 465416 kB' 'Inactive: 1421768 kB' 'Active(anon): 126192 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117268 kB' 'Mapped: 49804 kB' 'Shmem: 10492 kB' 'KReclaimable: 63284 kB' 'Slab: 161768 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98484 kB' 'KernelStack: 6464 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 314736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:03:13.508 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.508 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.508 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.508 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.508 14:04:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.508 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.508 14:04:14 -- 
setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the same setup/common.sh@31-32 scan loop walks every /proc/meminfo key again until the requested HugePages_Total key matches]
00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.510 14:04:14 -- setup/common.sh@33 -- # echo 1024 00:03:13.510 14:04:14 -- setup/common.sh@33 -- # return 0 00:03:13.510 14:04:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.510 14:04:14 -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.510 14:04:14 -- setup/hugepages.sh@27 -- # local node 00:03:13.510 14:04:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.510 14:04:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.510 14:04:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:13.510 14:04:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.510 14:04:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.510 14:04:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.510 14:04:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.510 14:04:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
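
The check traced at setup/hugepages.sh@107-110 above asserts that the kernel's reported HugePages_Total (1024 on this run) is fully accounted for by nr_hugepages plus surplus plus reserved pages. A minimal standalone sketch of that accounting check, assuming the standard /proc/meminfo fields and /proc/sys/vm/nr_hugepages (helper names here are illustrative, not the SPDK ones):

```bash
#!/usr/bin/env bash
# Sketch of the hugepage accounting check seen in the trace above.
# On this run: total=1024, nr_hugepages=1024, surp=0, resv=0.
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)

# Mirrors "(( 1024 == nr_hugepages + surp + resv ))" from the trace: the
# reported total must equal requested pages plus surplus plus reserved.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi
```
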
00:03:13.510 14:04:14 -- setup/common.sh@18 -- # local node=0 00:03:13.510 14:04:14 -- setup/common.sh@19 -- # local var val 00:03:13.510 14:04:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:13.510 14:04:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.510 14:04:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.510 14:04:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.510 14:04:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.510 14:04:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237072 kB' 'MemFree: 7915116 kB' 'MemUsed: 4321956 kB' 'SwapCached: 0 kB' 'Active: 465556 kB' 'Inactive: 1421768 kB' 'Active(anon): 126332 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771484 kB' 'Mapped: 49872 kB' 'AnonPages: 117444 kB' 'Shmem: 10492 kB' 'KernelStack: 6416 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63284 kB' 'Slab: 161768 kB' 'SReclaimable: 63284 kB' 'SUnreclaim: 98484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.510 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.510 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 
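
The get_meminfo calls traced throughout this test (setup/common.sh@17-33) read either /proc/meminfo or a per-NUMA-node meminfo file, strip the "Node N" prefix that per-node files carry, and scan key by key for the requested field. A self-contained reconstruction of that logic as a sketch (the real SPDK helper differs in detail, but this follows the traced steps):

```bash
#!/usr/bin/env bash
# Reconstruction of the traced get_meminfo logic: print the value of one
# meminfo key, either system-wide or for a single NUMA node.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _

    # Per-node files live in sysfs and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix, as traced

    # Scan line by line with IFS=': ', exactly as the xtrace shows.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp 0  # prints 0 for node0, matching the trace
```
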
[xtrace condensed: the setup/common.sh@31-32 scan loop walks the node0 meminfo keys until the requested HugePages_Surp key matches; the final iterations follow]
00:03:13.511 14:04:14 -- setup/common.sh@32 -- # [[ FilePmdMapped
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # continue 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:13.511 14:04:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:13.511 14:04:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.511 14:04:14 -- setup/common.sh@33 -- # echo 0 00:03:13.511 14:04:14 -- setup/common.sh@33 -- # return 0 00:03:13.511 14:04:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.511 14:04:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.511 node0=1024 expecting 1024 00:03:13.511 ************************************ 00:03:13.511 END TEST no_shrink_alloc 00:03:13.511 ************************************ 00:03:13.511 14:04:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.511 14:04:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.511 14:04:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.511 14:04:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.511 00:03:13.511 real 0m1.168s 00:03:13.511 user 0m0.502s 00:03:13.511 sys 0m0.688s 00:03:13.511 14:04:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:13.511 14:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:13.511 14:04:14 -- setup/hugepages.sh@217 -- # clear_hp 00:03:13.511 14:04:14 -- setup/hugepages.sh@37 -- # local node hp 00:03:13.512 14:04:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.512 14:04:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.512 14:04:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:13.512 14:04:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.512 14:04:14 -- setup/hugepages.sh@41 -- # echo 0 00:03:13.512 14:04:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:13.512 14:04:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:13.512 00:03:13.512 real 0m5.456s 00:03:13.512 user 0m2.222s 00:03:13.512 sys 0m2.948s 00:03:13.512 14:04:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:13.512 ************************************ 00:03:13.512 END TEST hugepages 00:03:13.512 ************************************ 00:03:13.512 14:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:13.512 14:04:14 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:13.512 14:04:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:13.512 14:04:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:13.512 
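
Between the hugepages and driver suites, the teardown traced at setup/hugepages.sh@217 (clear_hp) zeroes every per-node hugepage pool and exports CLEAR_HUGE=yes. A sketch of that teardown under the standard sysfs layout; the xtrace only shows `echo 0` inside the hugepages-* loop, so writing to the nr_hugepages file is an assumption here (it is the conventional target):

```bash
# Sketch of the clear_hp teardown traced above. Requires root.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Release the pool; the kernel returns the pages to the allocator.
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Later setup stages key off this flag to re-allocate hugepages cleanly.
    export CLEAR_HUGE=yes
}
```
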
14:04:14 -- common/autotest_common.sh@10 -- # set +x 00:03:13.512 ************************************ 00:03:13.512 START TEST driver 00:03:13.512 ************************************ 00:03:13.512 14:04:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:13.777 * Looking for test storage... 00:03:13.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:13.777 14:04:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:13.777 14:04:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:13.777 14:04:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:13.777 14:04:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:13.777 14:04:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:13.777 14:04:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:13.777 14:04:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:13.777 14:04:15 -- scripts/common.sh@335 -- # IFS=.-: 00:03:13.777 14:04:15 -- scripts/common.sh@335 -- # read -ra ver1 00:03:13.777 14:04:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:13.777 14:04:15 -- scripts/common.sh@336 -- # read -ra ver2 00:03:13.777 14:04:15 -- scripts/common.sh@337 -- # local 'op=<' 00:03:13.777 14:04:15 -- scripts/common.sh@339 -- # ver1_l=2 00:03:13.777 14:04:15 -- scripts/common.sh@340 -- # ver2_l=1 00:03:13.777 14:04:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:13.777 14:04:15 -- scripts/common.sh@343 -- # case "$op" in 00:03:13.777 14:04:15 -- scripts/common.sh@344 -- # : 1 00:03:13.777 14:04:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:13.777 14:04:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:13.777 14:04:15 -- scripts/common.sh@364 -- # decimal 1 00:03:13.777 14:04:15 -- scripts/common.sh@352 -- # local d=1 00:03:13.777 14:04:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:13.777 14:04:15 -- scripts/common.sh@354 -- # echo 1 00:03:13.777 14:04:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:13.777 14:04:15 -- scripts/common.sh@365 -- # decimal 2 00:03:13.777 14:04:15 -- scripts/common.sh@352 -- # local d=2 00:03:13.777 14:04:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:13.777 14:04:15 -- scripts/common.sh@354 -- # echo 2 00:03:13.777 14:04:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:13.777 14:04:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:13.777 14:04:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:13.777 14:04:15 -- scripts/common.sh@367 -- # return 0 00:03:13.777 14:04:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:13.777 14:04:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:13.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.777 --rc genhtml_branch_coverage=1 00:03:13.777 --rc genhtml_function_coverage=1 00:03:13.777 --rc genhtml_legend=1 00:03:13.777 --rc geninfo_all_blocks=1 00:03:13.777 --rc geninfo_unexecuted_blocks=1 00:03:13.777 00:03:13.777 ' 00:03:13.777 14:04:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:13.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.777 --rc genhtml_branch_coverage=1 00:03:13.777 --rc genhtml_function_coverage=1 00:03:13.777 --rc genhtml_legend=1 00:03:13.777 --rc geninfo_all_blocks=1 00:03:13.777 --rc geninfo_unexecuted_blocks=1 00:03:13.777 00:03:13.777 ' 00:03:13.777 14:04:15 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:03:13.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.777 --rc genhtml_branch_coverage=1 00:03:13.777 --rc genhtml_function_coverage=1 00:03:13.777 --rc genhtml_legend=1 00:03:13.777 --rc geninfo_all_blocks=1 00:03:13.777 --rc geninfo_unexecuted_blocks=1 00:03:13.777 00:03:13.777 ' 00:03:13.777 14:04:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:13.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.777 --rc genhtml_branch_coverage=1 00:03:13.777 --rc genhtml_function_coverage=1 00:03:13.777 --rc genhtml_legend=1 00:03:13.777 --rc geninfo_all_blocks=1 00:03:13.777 --rc geninfo_unexecuted_blocks=1 00:03:13.777 00:03:13.777 ' 00:03:13.777 14:04:15 -- setup/driver.sh@68 -- # setup reset 00:03:13.777 14:04:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.777 14:04:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:20.421 14:04:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:20.421 14:04:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.421 14:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.421 14:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:20.421 ************************************ 00:03:20.421 START TEST guess_driver 00:03:20.421 ************************************ 00:03:20.421 14:04:20 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:20.421 14:04:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:20.421 14:04:20 -- setup/driver.sh@47 -- # local fail=0 00:03:20.421 14:04:20 -- setup/driver.sh@49 -- # pick_driver 00:03:20.421 14:04:20 -- setup/driver.sh@36 -- # vfio 00:03:20.421 14:04:20 -- setup/driver.sh@21 -- # local iommu_grups 00:03:20.421 14:04:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:20.421 14:04:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:20.422 14:04:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:20.422 14:04:20 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:20.422 14:04:20 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:20.422 14:04:20 -- setup/driver.sh@32 -- # return 1 00:03:20.422 14:04:20 -- setup/driver.sh@38 -- # uio 00:03:20.422 14:04:20 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:20.422 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:20.422 14:04:20 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:20.422 Looking for driver=uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:20.422 14:04:20 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:20.422 14:04:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:20.422 14:04:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.422 14:04:20 -- setup/driver.sh@45 -- # setup output config 00:03:20.422 14:04:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.422 14:04:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:20.683 
14:04:21 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:20.683 14:04:21 -- setup/driver.sh@58 -- # continue 00:03:20.683 14:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.683 14:04:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.683 14:04:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:20.683 14:04:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.683 14:04:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.683 14:04:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:20.683 14:04:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.683 14:04:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.683 14:04:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:20.683 14:04:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.683 14:04:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:20.683 14:04:22 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:20.683 14:04:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.945 14:04:22 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:20.945 14:04:22 -- setup/driver.sh@65 -- # setup reset 00:03:20.945 14:04:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.945 14:04:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.541 00:03:27.541 real 0m7.037s 00:03:27.541 user 0m0.696s 00:03:27.541 sys 0m1.248s 00:03:27.541 14:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.541 14:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:27.541 ************************************ 00:03:27.541 END TEST guess_driver 00:03:27.541 ************************************ 00:03:27.541 ************************************ 00:03:27.541 END TEST driver 00:03:27.541 ************************************ 00:03:27.541 00:03:27.541 real 0m13.116s 00:03:27.541 user 0m1.079s 00:03:27.541 sys 0m1.969s 00:03:27.541 14:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.541 14:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:27.541 14:04:28 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:27.541 14:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.541 14:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.541 14:04:28 -- common/autotest_common.sh@10 -- # set +x 00:03:27.541 ************************************ 00:03:27.541 START TEST devices 00:03:27.541 ************************************ 00:03:27.542 14:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:27.542 * Looking for test storage... 
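
The guess_driver test that just finished picks vfio when IOMMU groups exist (or when vfio's unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, accepting it only if `modprobe --show-depends` resolves the module to a real .ko on the running kernel. A condensed sketch of that decision, reconstructed from the trace (the SPDK script spreads this across pick_driver/vfio/uio/is_driver helpers):

```bash
# Sketch of the traced driver-selection logic; prints the chosen driver.
guess_driver() {
    local unsafe=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # vfio is viable if the kernel exposes at least one IOMMU group, or if
    # unsafe no-IOMMU mode is switched on (the trace above found neither).
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        # Matches the "insmod .../uio_pci_generic.ko.xz" output seen above.
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}
```
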
00:03:27.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
[xtrace condensed: the scripts/common.sh lcov version check (cmp_versions 1.15 '<' 2) and the LCOV_OPTS/LCOV exports already traced in the driver test repeat here verbatim for the devices test; only the final export follows]
00:03:27.542 14:04:28 --
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:27.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.542 --rc genhtml_branch_coverage=1 00:03:27.542 --rc genhtml_function_coverage=1 00:03:27.542 --rc genhtml_legend=1 00:03:27.542 --rc geninfo_all_blocks=1 00:03:27.542 --rc geninfo_unexecuted_blocks=1 00:03:27.542 00:03:27.542 ' 00:03:27.542 14:04:28 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:27.542 14:04:28 -- setup/devices.sh@192 -- # setup reset 00:03:27.542 14:04:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.542 14:04:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.119 14:04:29 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:28.119 14:04:29 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:28.119 14:04:29 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:28.119 14:04:29 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:28.119 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.119 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:03:28.119 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:03:28.120 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.120 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:28.120 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:28.120 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.120 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:28.120 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:28.120 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.120 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:28.120 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:28.120 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.120 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:28.120 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:28.120 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.120 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.121 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:28.121 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:28.121 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:28.121 14:04:29 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.121 14:04:29 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:28.121 14:04:29 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:28.121 14:04:29 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:28.121 14:04:29 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:28.121 14:04:29 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:28.121 14:04:29 -- setup/devices.sh@196 -- # blocks=() 00:03:28.121 14:04:29 -- setup/devices.sh@196 -- # declare -a blocks 00:03:28.121 14:04:29 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:28.121 14:04:29 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:28.121 14:04:29 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:28.121 14:04:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.121 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:28.121 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:28.121 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:03:28.121 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:28.121 14:04:29 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:28.121 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:28.121 No valid GPT data, bailing 00:03:28.121 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.121 14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.121 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:28.121 14:04:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:28.121 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:28.121 14:04:29 -- setup/common.sh@80 -- # echo 1073741824 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:03:28.121 14:04:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.121 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:28.121 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.121 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:28.121 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:28.121 14:04:29 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:28.121 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:28.121 No valid GPT data, bailing 00:03:28.121 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:28.121 14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.121 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:28.121 14:04:29 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:28.121 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:28.121 14:04:29 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.121 14:04:29 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.121 14:04:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.121 14:04:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:28.122 14:04:29 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:03:28.122 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:28.122 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.122 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:28.122 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:28.122 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:28.122 14:04:29 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:28.122 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:28.122 No valid GPT data, bailing 00:03:28.122 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:28.122 14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.122 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.122 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:28.122 14:04:29 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:28.122 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:28.122 14:04:29 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.122 14:04:29 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.122 14:04:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.122 14:04:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:28.122 14:04:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.122 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:28.122 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:28.122 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:03:28.122 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:28.122 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:28.122 14:04:29 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:28.122 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:28.383 No valid GPT data, bailing 00:03:28.384 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:28.384 14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.384 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:28.384 14:04:29 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:28.384 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:28.384 14:04:29 -- setup/common.sh@80 -- # echo 4294967296 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:28.384 14:04:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.384 14:04:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:03:28.384 14:04:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.384 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:03:28.384 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme2 00:03:28.384 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:28.384 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:03:28.384 14:04:29 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:03:28.384 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:03:28.384 No valid GPT data, bailing 00:03:28.384 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:28.384 
14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.384 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:03:28.384 14:04:29 -- setup/common.sh@76 -- # local dev=nvme2n1 00:03:28.384 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:03:28.384 14:04:29 -- setup/common.sh@80 -- # echo 6343335936 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:03:28.384 14:04:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.384 14:04:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:28.384 14:04:29 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:28.384 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:03:28.384 14:04:29 -- setup/devices.sh@201 -- # ctrl=nvme3 00:03:28.384 14:04:29 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:28.384 14:04:29 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:03:28.384 14:04:29 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:03:28.384 14:04:29 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:03:28.384 No valid GPT data, bailing 00:03:28.384 14:04:29 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:28.384 14:04:29 -- scripts/common.sh@393 -- # pt= 00:03:28.384 14:04:29 -- scripts/common.sh@394 -- # return 1 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:03:28.384 14:04:29 -- setup/common.sh@76 -- # local dev=nvme3n1 00:03:28.384 14:04:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:03:28.384 14:04:29 -- setup/common.sh@80 -- # echo 5368709120 00:03:28.384 14:04:29 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:28.384 14:04:29 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:28.384 14:04:29 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:28.384 14:04:29 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:03:28.384 14:04:29 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:03:28.384 14:04:29 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:28.384 14:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.384 14:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.384 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:03:28.384 ************************************ 00:03:28.384 START TEST nvme_mount 00:03:28.384 ************************************ 00:03:28.384 14:04:29 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:28.384 14:04:29 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:03:28.384 14:04:29 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:03:28.384 14:04:29 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:28.384 14:04:29 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:28.384 14:04:29 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:03:28.384 14:04:29 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:28.384 14:04:29 -- setup/common.sh@40 -- # local part_no=1 00:03:28.384 14:04:29 -- setup/common.sh@41 -- # local size=1073741824 00:03:28.384 14:04:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:28.384 14:04:29 -- setup/common.sh@44 -- # parts=() 00:03:28.384 14:04:29 -- 
setup/common.sh@44 -- # local parts 00:03:28.384 14:04:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:28.384 14:04:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.384 14:04:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:28.384 14:04:29 -- setup/common.sh@46 -- # (( part++ )) 00:03:28.384 14:04:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:28.384 14:04:29 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:28.384 14:04:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:28.384 14:04:29 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:03:29.773 Creating new GPT entries in memory. 00:03:29.773 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:29.773 other utilities. 00:03:29.773 14:04:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:29.773 14:04:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.773 14:04:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:29.773 14:04:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:29.773 14:04:30 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:30.719 Creating new GPT entries in memory. 00:03:30.719 The operation has completed successfully. 00:03:30.719 14:04:31 -- setup/common.sh@57 -- # (( part++ )) 00:03:30.719 14:04:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.719 14:04:31 -- setup/common.sh@62 -- # wait 53701 00:03:30.719 14:04:31 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.719 14:04:31 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:30.719 14:04:31 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.719 14:04:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:03:30.719 14:04:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:03:30.719 14:04:31 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.719 14:04:32 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:30.719 14:04:32 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:30.719 14:04:32 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:03:30.719 14:04:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.719 14:04:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:30.719 14:04:32 -- setup/devices.sh@53 -- # local found=0 00:03:30.719 14:04:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.719 14:04:32 -- setup/devices.sh@56 -- # : 00:03:30.719 14:04:32 -- setup/devices.sh@59 -- # local pci status 00:03:30.719 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.719 14:04:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:30.719 14:04:32 -- setup/devices.sh@47 -- # setup output config 00:03:30.719 14:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.719 14:04:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:30.719 14:04:32 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:30.719 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.980 14:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:30.980 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.241 14:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:31.241 14:04:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:03:31.241 14:04:32 -- setup/devices.sh@63 -- # found=1 00:03:31.241 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.241 14:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:31.241 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.241 14:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:31.241 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.241 14:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:31.241 14:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.503 14:04:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.503 14:04:32 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:31.503 14:04:32 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.503 14:04:32 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.503 14:04:32 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:31.503 14:04:32 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:31.503 14:04:32 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.503 14:04:32 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.503 14:04:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:31.503 14:04:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:03:31.503 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.503 14:04:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:31.503 14:04:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:31.765 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:31.765 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:31.765 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:31.765 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:03:31.765 14:04:33 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:31.765 14:04:33 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:31.765 14:04:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.765 14:04:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:03:31.765 14:04:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:03:31.765 14:04:33 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.765 14:04:33 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:31.765 14:04:33 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:31.765 14:04:33 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:03:31.765 14:04:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.765 14:04:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:31.765 14:04:33 -- setup/devices.sh@53 -- # local found=0 00:03:31.765 14:04:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.765 14:04:33 -- setup/devices.sh@56 -- # : 00:03:31.765 14:04:33 -- setup/devices.sh@59 -- # local pci status 00:03:31.765 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.765 14:04:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:31.766 14:04:33 -- setup/devices.sh@47 -- # setup output config 00:03:31.766 14:04:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.766 14:04:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.027 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.027 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.027 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.027 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.289 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.289 14:04:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:03:32.289 14:04:33 -- setup/devices.sh@63 -- # found=1 00:03:32.289 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.289 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.289 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.551 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.551 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.551 14:04:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.551 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.551 14:04:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.551 14:04:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:32.551 14:04:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.551 14:04:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.551 14:04:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.551 14:04:33 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.551 14:04:33 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:03:32.551 14:04:33 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:32.551 14:04:33 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:03:32.551 14:04:33 -- setup/devices.sh@50 -- # local mount_point= 00:03:32.551 14:04:33 -- setup/devices.sh@51 -- # local test_file= 00:03:32.551 14:04:33 -- setup/devices.sh@53 -- # local found=0 
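
The verify pass traced above re-runs "setup output config" with PCI_ALLOWED pinned to 0000:00:08.0 and reads each status line as "pci _ _ status", comparing the first field against the escaped-glob form of the allowed address before checking the Active-devices text. A rough sketch of that loop, assuming the four-column "setup.sh config" line format seen elsewhere in this log (a paraphrase for orientation, not the SPDK source):

  allowed_pci="0000:00:08.0"
  expected="nvme1n1:nvme1n1"        # the mounts argument handed to verify
  found=0
  while read -r pci _ _ status; do
    [[ $pci == "$allowed_pci" ]] || continue   # ignore the other controllers
    # a hit looks like: "Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev"
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
  done < <(PCI_ALLOWED="$allowed_pci" ./scripts/setup.sh config)
  (( found == 1 )) && echo "mounted device is visible behind $allowed_pci"
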
00:03:32.551 14:04:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:32.551 14:04:33 -- setup/devices.sh@59 -- # local pci status 00:03:32.551 14:04:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.551 14:04:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:32.551 14:04:33 -- setup/devices.sh@47 -- # setup output config 00:03:32.551 14:04:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.551 14:04:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.813 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.813 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.813 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:32.813 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.074 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:33.074 14:04:34 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:03:33.074 14:04:34 -- setup/devices.sh@63 -- # found=1 00:03:33.074 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.074 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:33.074 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.336 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:33.336 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.336 14:04:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:33.336 14:04:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.598 14:04:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.598 14:04:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:33.598 14:04:34 -- setup/devices.sh@68 -- # return 0 00:03:33.598 14:04:34 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:33.598 14:04:34 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.598 14:04:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:33.598 14:04:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:33.598 14:04:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:33.598 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.598 00:03:33.598 real 0m5.028s 00:03:33.598 user 0m0.990s 00:03:33.598 sys 0m1.304s 00:03:33.598 14:04:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:33.598 ************************************ 00:03:33.598 END TEST nvme_mount 00:03:33.598 ************************************ 00:03:33.598 14:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:33.598 14:04:34 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:33.598 14:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:33.598 14:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:33.598 14:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:33.598 ************************************ 00:03:33.598 START TEST dm_mount 00:03:33.598 ************************************ 00:03:33.598 14:04:34 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:33.598 14:04:34 -- setup/devices.sh@144 -- # pv=nvme1n1 00:03:33.599 14:04:34 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:03:33.599 14:04:34 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:03:33.599 14:04:34 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:03:33.599 14:04:34 -- setup/common.sh@39 -- # local disk=nvme1n1 00:03:33.599 14:04:34 -- setup/common.sh@40 -- # local part_no=2 00:03:33.599 14:04:34 -- setup/common.sh@41 -- # local size=1073741824 00:03:33.599 14:04:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:33.599 14:04:34 -- setup/common.sh@44 -- # parts=() 00:03:33.599 14:04:34 -- setup/common.sh@44 -- # local parts 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.599 14:04:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.599 14:04:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.599 14:04:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.599 14:04:34 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:33.599 14:04:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:03:33.599 14:04:34 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:03:34.610 Creating new GPT entries in memory. 00:03:34.610 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.610 other utilities. 00:03:34.610 14:04:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.610 14:04:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.610 14:04:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.610 14:04:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.610 14:04:35 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:03:35.554 Creating new GPT entries in memory. 00:03:35.554 The operation has completed successfully. 00:03:35.554 14:04:36 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.554 14:04:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.554 14:04:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.554 14:04:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.554 14:04:36 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:03:36.942 The operation has completed successfully. 
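
The partition_drive trace above zaps the GPT and then lays out equal-sized partitions, with each sgdisk rewrite run under flock on the disk node, presumably so nothing else touches the table mid-sequence. A condensed sketch of the same arithmetic using the literal values from this run; the 512-byte-sector reading is inferred from the log, not taken from the SPDK source:

  disk=/dev/nvme1n1
  size=$(( 1073741824 / 4096 ))      # 262144 sectors, i.e. 128 MiB at 512 B/sector
  sgdisk "$disk" --zap-all           # destroy the old GPT/MBR data structures
  start=2048                         # first 1 MiB-aligned usable sector
  for part in 1 2; do
    end=$(( start + size - 1 ))      # 264191 for part 1, 526335 for part 2
    flock "$disk" sgdisk "$disk" --new=${part}:${start}:${end}
    start=$(( end + 1 ))
  done
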
00:03:36.942 14:04:38 -- setup/common.sh@57 -- # (( part++ )) 00:03:36.942 14:04:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.942 14:04:38 -- setup/common.sh@62 -- # wait 54329 00:03:36.942 14:04:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:36.942 14:04:38 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.942 14:04:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.942 14:04:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:36.942 14:04:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:36.942 14:04:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.942 14:04:38 -- setup/devices.sh@161 -- # break 00:03:36.942 14:04:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.942 14:04:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:36.942 14:04:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:36.942 14:04:38 -- setup/devices.sh@166 -- # dm=dm-0 00:03:36.942 14:04:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:03:36.942 14:04:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:03:36.942 14:04:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.942 14:04:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:36.942 14:04:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.942 14:04:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:36.942 14:04:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:36.942 14:04:38 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.942 14:04:38 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.942 14:04:38 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:36.942 14:04:38 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:03:36.942 14:04:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:36.942 14:04:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:36.943 14:04:38 -- setup/devices.sh@53 -- # local found=0 00:03:36.943 14:04:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:36.943 14:04:38 -- setup/devices.sh@56 -- # : 00:03:36.943 14:04:38 -- setup/devices.sh@59 -- # local pci status 00:03:36.943 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.943 14:04:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:36.943 14:04:38 -- setup/devices.sh@47 -- # setup output config 00:03:36.943 14:04:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.943 14:04:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:36.943 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:36.943 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.204 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.204 14:04:38 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.466 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.466 14:04:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:37.466 14:04:38 -- setup/devices.sh@63 -- # found=1 00:03:37.466 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.466 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.466 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.466 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.466 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.466 14:04:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.466 14:04:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.726 14:04:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.726 14:04:38 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:37.726 14:04:38 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.726 14:04:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.726 14:04:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:37.726 14:04:39 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.726 14:04:39 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:03:37.726 14:04:39 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:03:37.726 14:04:39 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:03:37.726 14:04:39 -- setup/devices.sh@50 -- # local mount_point= 00:03:37.726 14:04:39 -- setup/devices.sh@51 -- # local test_file= 00:03:37.726 14:04:39 -- setup/devices.sh@53 -- # local found=0 00:03:37.726 14:04:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.726 14:04:39 -- setup/devices.sh@59 -- # local pci status 00:03:37.726 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.726 14:04:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:03:37.726 14:04:39 -- setup/devices.sh@47 -- # setup output config 00:03:37.726 14:04:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.726 14:04:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:37.726 14:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.726 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.988 14:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:37.988 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.249 14:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:38.249 14:04:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:03:38.249 14:04:39 -- setup/devices.sh@63 -- # found=1 00:03:38.249 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.249 14:04:39 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:38.249 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.249 14:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:38.249 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.511 14:04:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:03:38.511 14:04:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.511 14:04:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.511 14:04:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.511 14:04:39 -- setup/devices.sh@68 -- # return 0 00:03:38.511 14:04:39 -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.511 14:04:39 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.511 14:04:39 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.511 14:04:39 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:38.511 14:04:39 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:38.511 14:04:39 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:03:38.511 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.511 14:04:39 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:03:38.511 14:04:39 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:03:38.511 00:03:38.511 real 0m4.969s 00:03:38.511 user 0m0.657s 00:03:38.511 sys 0m0.905s 00:03:38.511 14:04:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:38.511 ************************************ 00:03:38.511 END TEST dm_mount 00:03:38.511 ************************************ 00:03:38.511 14:04:39 -- common/autotest_common.sh@10 -- # set +x 00:03:38.511 14:04:39 -- setup/devices.sh@1 -- # cleanup 00:03:38.511 14:04:39 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.511 14:04:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:38.511 14:04:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:38.511 14:04:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:03:38.511 14:04:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:03:38.511 14:04:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:03:39.084 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:39.084 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:39.084 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:39.084 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:03:39.084 14:04:40 -- setup/devices.sh@12 -- # cleanup_dm 00:03:39.084 14:04:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:39.084 14:04:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:39.084 14:04:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:03:39.084 14:04:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:03:39.084 14:04:40 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:03:39.084 14:04:40 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:03:39.084 ************************************ 00:03:39.084 END TEST devices 00:03:39.084 ************************************ 00:03:39.084 00:03:39.084 real 0m12.179s 00:03:39.084 user 0m2.476s 00:03:39.084 sys 0m2.895s 00:03:39.084 14:04:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:39.084 14:04:40 -- common/autotest_common.sh@10 -- # 
set +x
00:03:39.084 ************************************
00:03:39.084 END TEST setup.sh
00:03:39.084 ************************************
00:03:39.084
00:03:39.084 real    0m41.844s
00:03:39.084 user    0m8.137s
00:03:39.084 sys     0m10.920s
00:03:39.084 14:04:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:39.084 Hugepages
00:03:39.084 node     hugesize     free /  total
00:03:39.084 node0   1048576kB        0 /      0
00:03:39.084 node0      2048kB     2048 /   2048
00:03:39.084
00:03:39.084 Type     BDF             Vendor Device NUMA    Driver      Device     Block devices
00:03:39.346 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci  -          vda
00:03:39.346 NVMe     0000:00:06.0    1b36   0010   unknown nvme        nvme2      nvme2n1
00:03:39.346 NVMe     0000:00:07.0    1b36   0010   unknown nvme        nvme3      nvme3n1
00:03:39.346 NVMe     0000:00:08.0    1b36   0010   unknown nvme        nvme1      nvme1n1 nvme1n2 nvme1n3
00:03:39.608 NVMe     0000:00:09.0    1b36   0010   unknown nvme        nvme0      nvme0n1
00:03:39.608 14:04:40 -- spdk/autotest.sh@128 -- # uname -s
00:03:39.608 14:04:40 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:03:39.608 14:04:40 -- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:03:39.608 14:04:40 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:40.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:40.553 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:03:40.553 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:03:40.553 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:03:40.553 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:03:40.814 14:04:42 -- common/autotest_common.sh@1527 -- # sleep 1
00:03:41.756 14:04:43 -- common/autotest_common.sh@1528 -- # bdfs=()
00:03:41.756 14:04:43 -- common/autotest_common.sh@1528 -- # local bdfs
00:03:41.756 14:04:43 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:03:41.756 14:04:43 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:03:41.756 14:04:43 -- common/autotest_common.sh@1508 -- # bdfs=()
00:03:41.756 14:04:43 -- common/autotest_common.sh@1508 -- # local bdfs
00:03:41.756 14:04:43 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:41.756 14:04:43 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:41.756 14:04:43 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:03:41.756 14:04:43 -- common/autotest_common.sh@1510 -- # (( 4 == 0 ))
00:03:41.756 14:04:43 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:03:41.756 14:04:43 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:42.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:42.329 Waiting for block devices as requested
00:03:42.329 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:03:42.329 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:03:42.329 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:03:42.590 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:03:47.911 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:03:47.911 14:04:48 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:03:47.911 14:04:48 --
common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:47.911 14:04:48 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:03:47.911 14:04:48 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:47.911 14:04:48 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:03:47.911 14:04:48 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:03:47.911 14:04:48 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:47.911 14:04:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:47.911 14:04:48 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:47.911 14:04:48 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:47.911 14:04:48 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:47.911 14:04:48 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:47.911 14:04:48 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:47.911 14:04:48 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:47.911 14:04:48 -- common/autotest_common.sh@1552 -- # continue 00:03:47.911 14:04:48 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:47.911 14:04:48 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:47.911 14:04:48 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:47.911 14:04:49 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:47.911 14:04:49 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # cut 
-d: -f2 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:47.911 14:04:49 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1552 -- # continue 00:03:47.911 14:04:49 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:47.911 14:04:49 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:47.911 14:04:49 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:47.911 14:04:49 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:47.911 14:04:49 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:47.911 14:04:49 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1552 -- # continue 00:03:47.911 14:04:49 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:47.911 14:04:49 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:03:47.911 14:04:49 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:47.911 14:04:49 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:47.912 14:04:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:47.912 14:04:49 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:47.912 14:04:49 -- 
common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:47.912 14:04:49 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:47.912 14:04:49 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:03:47.912 14:04:49 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:47.912 14:04:49 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:47.912 14:04:49 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:47.912 14:04:49 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:47.912 14:04:49 -- common/autotest_common.sh@1552 -- # continue 00:03:47.912 14:04:49 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:03:47.912 14:04:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.912 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:47.912 14:04:49 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:03:47.912 14:04:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.912 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:03:47.912 14:04:49 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.855 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:48.855 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:48.855 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:03:48.855 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:03:49.117 14:04:50 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:03:49.117 14:04:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.117 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.117 14:04:50 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:03:49.117 14:04:50 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:03:49.117 14:04:50 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:03:49.117 14:04:50 -- common/autotest_common.sh@1572 -- # bdfs=() 00:03:49.117 14:04:50 -- common/autotest_common.sh@1572 -- # local bdfs 00:03:49.117 14:04:50 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:03:49.117 14:04:50 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:49.117 14:04:50 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:49.117 14:04:50 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:49.117 14:04:50 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:49.117 14:04:50 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:03:49.117 14:04:50 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:49.117 14:04:50 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:49.117 14:04:50 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:49.117 14:04:50 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
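
Two checks are compressed into the trace above. First, nvme_namespace_revert parses "nvme id-ctrl" output: oacs reads back as 0x12a, and masking bit 3 (Namespace Management, 0x8) yields the oacs_ns_manage=8 seen in the log, since 0x12a & 0x8 = 0x8; with unvmcap at 0 there is no unallocated capacity to revert. Second, opal_revert_cleanup keeps only controllers whose PCI device ID equals 0x0a54 (an Intel datacenter NVMe part); the QEMU controllers here report 0x0010, so the list stays empty and the OPAL revert is skipped. A sketch of that filter, with the loop body inferred from the trace rather than copied from it:

  want=0x0a54
  bdfs=()
  for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 on QEMU
    [[ $device == "$want" ]] && bdfs+=("$bdf")
  done
  # empty output here means there is nothing for the OPAL revert to do
  printf '%s\n' "${bdfs[@]}"
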
00:03:49.117 14:04:50 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:49.117 14:04:50 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:49.117 14:04:50 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:03:49.117 14:04:50 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:49.117 14:04:50 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:49.117 14:04:50 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:03:49.117 14:04:50 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:03:49.117 14:04:50 -- common/autotest_common.sh@1588 -- # return 0 00:03:49.117 14:04:50 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:03:49.117 14:04:50 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:03:49.117 14:04:50 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:49.117 14:04:50 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:49.117 14:04:50 -- spdk/autotest.sh@160 -- # timing_enter lib 00:03:49.117 14:04:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.117 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.117 14:04:50 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:49.117 14:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.117 14:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.117 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.117 ************************************ 00:03:49.117 START TEST env 00:03:49.117 ************************************ 00:03:49.117 14:04:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:49.117 * Looking for test storage... 00:03:49.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:49.117 14:04:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:49.118 14:04:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:49.118 14:04:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:49.380 14:04:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:49.380 14:04:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:49.380 14:04:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:49.380 14:04:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:49.380 14:04:50 -- scripts/common.sh@335 -- # IFS=.-: 00:03:49.380 14:04:50 -- scripts/common.sh@335 -- # read -ra ver1 00:03:49.380 14:04:50 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.380 14:04:50 -- scripts/common.sh@336 -- # read -ra ver2 00:03:49.380 14:04:50 -- scripts/common.sh@337 -- # local 'op=<' 00:03:49.380 14:04:50 -- scripts/common.sh@339 -- # ver1_l=2 00:03:49.380 14:04:50 -- scripts/common.sh@340 -- # ver2_l=1 00:03:49.380 14:04:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:49.380 14:04:50 -- scripts/common.sh@343 -- # case "$op" in 00:03:49.380 14:04:50 -- scripts/common.sh@344 -- # : 1 00:03:49.380 14:04:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:49.380 14:04:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.380 14:04:50 -- scripts/common.sh@364 -- # decimal 1 00:03:49.380 14:04:50 -- scripts/common.sh@352 -- # local d=1 00:03:49.380 14:04:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.380 14:04:50 -- scripts/common.sh@354 -- # echo 1 00:03:49.380 14:04:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:49.380 14:04:50 -- scripts/common.sh@365 -- # decimal 2 00:03:49.380 14:04:50 -- scripts/common.sh@352 -- # local d=2 00:03:49.380 14:04:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.380 14:04:50 -- scripts/common.sh@354 -- # echo 2 00:03:49.380 14:04:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:49.380 14:04:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:49.380 14:04:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:49.380 14:04:50 -- scripts/common.sh@367 -- # return 0 00:03:49.380 14:04:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.380 14:04:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:49.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.380 --rc genhtml_branch_coverage=1 00:03:49.380 --rc genhtml_function_coverage=1 00:03:49.380 --rc genhtml_legend=1 00:03:49.380 --rc geninfo_all_blocks=1 00:03:49.380 --rc geninfo_unexecuted_blocks=1 00:03:49.380 00:03:49.380 ' 00:03:49.380 14:04:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:49.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.380 --rc genhtml_branch_coverage=1 00:03:49.380 --rc genhtml_function_coverage=1 00:03:49.380 --rc genhtml_legend=1 00:03:49.380 --rc geninfo_all_blocks=1 00:03:49.380 --rc geninfo_unexecuted_blocks=1 00:03:49.380 00:03:49.380 ' 00:03:49.380 14:04:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:49.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.380 --rc genhtml_branch_coverage=1 00:03:49.380 --rc genhtml_function_coverage=1 00:03:49.380 --rc genhtml_legend=1 00:03:49.380 --rc geninfo_all_blocks=1 00:03:49.380 --rc geninfo_unexecuted_blocks=1 00:03:49.380 00:03:49.380 ' 00:03:49.380 14:04:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:49.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.380 --rc genhtml_branch_coverage=1 00:03:49.380 --rc genhtml_function_coverage=1 00:03:49.380 --rc genhtml_legend=1 00:03:49.380 --rc geninfo_all_blocks=1 00:03:49.381 --rc geninfo_unexecuted_blocks=1 00:03:49.381 00:03:49.381 ' 00:03:49.381 14:04:50 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:49.381 14:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.381 14:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.381 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.381 ************************************ 00:03:49.381 START TEST env_memory 00:03:49.381 ************************************ 00:03:49.381 14:04:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:49.381 00:03:49.381 00:03:49.381 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.381 http://cunit.sourceforge.net/ 00:03:49.381 00:03:49.381 00:03:49.381 Suite: memory 00:03:49.381 Test: alloc and free memory map ...[2024-12-04 14:04:50.694883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:49.381 passed 00:03:49.381 Test: mem 
map translation ...[2024-12-04 14:04:50.734034] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:49.381 [2024-12-04 14:04:50.734219] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:49.381 [2024-12-04 14:04:50.734358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:49.381 [2024-12-04 14:04:50.734428] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:49.381 passed 00:03:49.381 Test: mem map registration ...[2024-12-04 14:04:50.803199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:49.381 [2024-12-04 14:04:50.803348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:49.381 passed 00:03:49.642 Test: mem map adjacent registrations ...passed 00:03:49.642 00:03:49.642 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.642 suites 1 1 n/a 0 0 00:03:49.642 tests 4 4 4 0 0 00:03:49.642 asserts 152 152 152 0 n/a 00:03:49.642 00:03:49.642 Elapsed time = 0.234 seconds 00:03:49.642 00:03:49.642 real 0m0.273s 00:03:49.642 user 0m0.247s 00:03:49.642 sys 0m0.016s 00:03:49.642 14:04:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:49.642 ************************************ 00:03:49.642 END TEST env_memory 00:03:49.642 ************************************ 00:03:49.642 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.642 14:04:50 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:49.642 14:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:49.642 14:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:49.642 14:04:50 -- common/autotest_common.sh@10 -- # set +x 00:03:49.642 ************************************ 00:03:49.642 START TEST env_vtophys 00:03:49.642 ************************************ 00:03:49.642 14:04:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:49.642 EAL: lib.eal log level changed from notice to debug 00:03:49.642 EAL: Detected lcore 0 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 1 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 2 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 3 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 4 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 5 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 6 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 7 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 8 as core 0 on socket 0 00:03:49.642 EAL: Detected lcore 9 as core 0 on socket 0 00:03:49.642 EAL: Maximum logical cores by configuration: 128 00:03:49.642 EAL: Detected CPU lcores: 10 00:03:49.642 EAL: Detected NUMA nodes: 1 00:03:49.642 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:49.642 EAL: Detected shared linkage of DPDK 00:03:49.642 EAL: No shared files mode enabled, IPC will be disabled 00:03:49.642 EAL: Selected IOVA mode 'PA' 00:03:49.642 EAL: Probing VFIO support... 
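The env_memory failures above (len=1234, vaddr=1234, and the out-of-range address) are the API enforcing its granularity: spdk_mem_map translations are tracked in 2 MB units, so both vaddr and len must be 2 MB-aligned. A minimal C sketch of the same calls, assuming a map already created via spdk_mem_map_alloc(); exact signatures are from spdk/env.h of roughly this vintage and may differ in other releases:

```c
/* Sketch only -- not part of the test run. Exercises the error paths seen
 * in the env_memory output: SPDK mem maps track memory in 2 MB units, so
 * any vaddr/len that is not 2 MB-aligned is rejected as invalid. */
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

#define MAP_2MB (2ULL * 1024 * 1024)

static void
probe_mem_map(struct spdk_mem_map *map)
{
	uint64_t vaddr = 0x200000200000ULL; /* hypothetical 2 MB-aligned VA */
	uint64_t size;

	/* Valid: address and length are both 2 MB multiples. */
	if (spdk_mem_map_set_translation(map, vaddr, MAP_2MB, 0xdead00000000ULL) == 0) {
		size = MAP_2MB;
		printf("pa: 0x%" PRIx64 "\n", spdk_mem_map_translate(map, vaddr, &size));
	}

	/* Invalid: len=1234 -- the exact rejection logged above. */
	if (spdk_mem_map_set_translation(map, vaddr, 1234, 0xdead00000000ULL) != 0) {
		printf("unaligned length rejected, as expected\n");
	}
}
```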
00:03:49.642 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:49.642 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:49.642 EAL: Ask a virtual area of 0x2e000 bytes 00:03:49.642 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:49.642 EAL: Setting up physically contiguous memory... 00:03:49.642 EAL: Setting maximum number of open files to 524288 00:03:49.642 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:49.642 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:49.642 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.642 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:49.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.642 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.642 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:49.642 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:49.642 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.642 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:49.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.642 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.642 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:49.642 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:49.642 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.642 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:49.642 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.642 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.643 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:49.643 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:49.643 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.643 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:49.643 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.643 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.643 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:49.643 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:49.643 EAL: Hugepages will be freed exactly as allocated. 00:03:49.643 EAL: No shared files mode enabled, IPC is disabled 00:03:49.643 EAL: No shared files mode enabled, IPC is disabled 00:03:49.904 EAL: TSC frequency is ~2600000 KHz 00:03:49.904 EAL: Main lcore 0 is ready (tid=7ff97511ea40;cpuset=[0]) 00:03:49.904 EAL: Trying to obtain current memory policy. 00:03:49.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.904 EAL: Restoring previous memory policy: 0 00:03:49.904 EAL: request: mp_malloc_sync 00:03:49.904 EAL: No shared files mode enabled, IPC is disabled 00:03:49.904 EAL: Heap on socket 0 was expanded by 2MB 00:03:49.904 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:49.904 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:49.904 EAL: Mem event callback 'spdk:(nil)' registered 00:03:49.904 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:49.904 00:03:49.904 00:03:49.904 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.904 http://cunit.sourceforge.net/ 00:03:49.904 00:03:49.904 00:03:49.904 Suite: components_suite 00:03:50.166 Test: vtophys_malloc_test ...passed 00:03:50.166 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
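For the vtophys suite running here, the operation under test is the virtual-to-physical lookup itself. A hedged sketch of that call path, assuming an already-initialized SPDK env; note spdk_vtophys() took only the buffer pointer in older releases, so check the headers you build against:

```c
/* Sketch: allocate a DMA-safe buffer from the env layer and resolve its
 * physical address, as vtophys_malloc_test does above. */
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

static void
show_vtophys(void)
{
	/* Pinned, zeroed, 4 KiB-aligned allocation suitable for DMA. */
	void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
	uint64_t size = 4096;
	uint64_t paddr;

	if (buf == NULL) {
		return;
	}

	paddr = spdk_vtophys(buf, &size); /* size is clamped to the mapped run */
	if (paddr != SPDK_VTOPHYS_ERROR) {
		printf("va %p -> pa 0x%" PRIx64 " (%" PRIu64 " bytes)\n", buf, paddr, size);
	}

	spdk_dma_free(buf);
}
```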
00:03:50.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.166 EAL: Restoring previous memory policy: 4 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was expanded by 4MB 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was shrunk by 4MB 00:03:50.166 EAL: Trying to obtain current memory policy. 00:03:50.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.166 EAL: Restoring previous memory policy: 4 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was expanded by 6MB 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was shrunk by 6MB 00:03:50.166 EAL: Trying to obtain current memory policy. 00:03:50.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.166 EAL: Restoring previous memory policy: 4 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was expanded by 10MB 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was shrunk by 10MB 00:03:50.166 EAL: Trying to obtain current memory policy. 00:03:50.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.166 EAL: Restoring previous memory policy: 4 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was expanded by 18MB 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was shrunk by 18MB 00:03:50.166 EAL: Trying to obtain current memory policy. 00:03:50.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.166 EAL: Restoring previous memory policy: 4 00:03:50.166 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.166 EAL: request: mp_malloc_sync 00:03:50.166 EAL: No shared files mode enabled, IPC is disabled 00:03:50.166 EAL: Heap on socket 0 was expanded by 34MB 00:03:50.429 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.429 EAL: request: mp_malloc_sync 00:03:50.429 EAL: No shared files mode enabled, IPC is disabled 00:03:50.429 EAL: Heap on socket 0 was shrunk by 34MB 00:03:50.429 EAL: Trying to obtain current memory policy. 
00:03:50.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.429 EAL: Restoring previous memory policy: 4 00:03:50.429 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.429 EAL: request: mp_malloc_sync 00:03:50.429 EAL: No shared files mode enabled, IPC is disabled 00:03:50.429 EAL: Heap on socket 0 was expanded by 66MB 00:03:50.429 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.429 EAL: request: mp_malloc_sync 00:03:50.429 EAL: No shared files mode enabled, IPC is disabled 00:03:50.429 EAL: Heap on socket 0 was shrunk by 66MB 00:03:50.429 EAL: Trying to obtain current memory policy. 00:03:50.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.429 EAL: Restoring previous memory policy: 4 00:03:50.429 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.429 EAL: request: mp_malloc_sync 00:03:50.429 EAL: No shared files mode enabled, IPC is disabled 00:03:50.429 EAL: Heap on socket 0 was expanded by 130MB 00:03:50.691 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.691 EAL: request: mp_malloc_sync 00:03:50.691 EAL: No shared files mode enabled, IPC is disabled 00:03:50.691 EAL: Heap on socket 0 was shrunk by 130MB 00:03:50.953 EAL: Trying to obtain current memory policy. 00:03:50.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.953 EAL: Restoring previous memory policy: 4 00:03:50.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.953 EAL: request: mp_malloc_sync 00:03:50.953 EAL: No shared files mode enabled, IPC is disabled 00:03:50.953 EAL: Heap on socket 0 was expanded by 258MB 00:03:51.215 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.215 EAL: request: mp_malloc_sync 00:03:51.215 EAL: No shared files mode enabled, IPC is disabled 00:03:51.215 EAL: Heap on socket 0 was shrunk by 258MB 00:03:51.480 EAL: Trying to obtain current memory policy. 00:03:51.480 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.742 EAL: Restoring previous memory policy: 4 00:03:51.742 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.742 EAL: request: mp_malloc_sync 00:03:51.742 EAL: No shared files mode enabled, IPC is disabled 00:03:51.742 EAL: Heap on socket 0 was expanded by 514MB 00:03:52.313 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.313 EAL: request: mp_malloc_sync 00:03:52.313 EAL: No shared files mode enabled, IPC is disabled 00:03:52.313 EAL: Heap on socket 0 was shrunk by 514MB 00:03:52.886 EAL: Trying to obtain current memory policy. 
00:03:52.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.147 EAL: Restoring previous memory policy: 4 00:03:53.147 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.147 EAL: request: mp_malloc_sync 00:03:53.147 EAL: No shared files mode enabled, IPC is disabled 00:03:53.147 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.526 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.526 EAL: request: mp_malloc_sync 00:03:54.526 EAL: No shared files mode enabled, IPC is disabled 00:03:54.526 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.121 passed 00:03:55.121 00:03:55.121 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.121 suites 1 1 n/a 0 0 00:03:55.121 tests 2 2 2 0 0 00:03:55.121 asserts 5327 5327 5327 0 n/a 00:03:55.121 00:03:55.121 Elapsed time = 5.218 seconds 00:03:55.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.122 EAL: request: mp_malloc_sync 00:03:55.122 EAL: No shared files mode enabled, IPC is disabled 00:03:55.122 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.122 EAL: No shared files mode enabled, IPC is disabled 00:03:55.122 EAL: No shared files mode enabled, IPC is disabled 00:03:55.122 EAL: No shared files mode enabled, IPC is disabled 00:03:55.122 00:03:55.122 real 0m5.493s 00:03:55.122 user 0m4.439s 00:03:55.122 sys 0m0.899s 00:03:55.122 14:04:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:55.122 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.122 ************************************ 00:03:55.122 END TEST env_vtophys 00:03:55.122 ************************************ 00:03:55.122 14:04:56 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.122 14:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.122 14:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.122 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.122 ************************************ 00:03:55.122 START TEST env_pci 00:03:55.122 ************************************ 00:03:55.122 14:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.122 00:03:55.122 00:03:55.122 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.122 http://cunit.sourceforge.net/ 00:03:55.122 00:03:55.122 00:03:55.122 Suite: pci 00:03:55.122 Test: pci_hook ...[2024-12-04 14:04:56.551489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56045 has claimed it 00:03:55.122 passed 00:03:55.122 00:03:55.122 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.122 suites 1 1 n/a 0 0 00:03:55.122 tests 1 1 1 0 0 00:03:55.122 asserts 25 25 25 0 n/a 00:03:55.122 00:03:55.122 Elapsed time = 0.006 seconds 00:03:55.122 EAL: Cannot find device (10000:00:01.0) 00:03:55.122 EAL: Failed to attach device on primary process 00:03:55.381 00:03:55.382 real 0m0.068s 00:03:55.382 user 0m0.033s 00:03:55.382 sys 0m0.033s 00:03:55.382 14:04:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:55.382 ************************************ 00:03:55.382 END TEST env_pci 00:03:55.382 ************************************ 00:03:55.382 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.382 14:04:56 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.382 14:04:56 -- env/env.sh@15 -- # uname 00:03:55.382 14:04:56 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.382 14:04:56 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:03:55.382 14:04:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.382 14:04:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:55.382 14:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.382 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.382 ************************************ 00:03:55.382 START TEST env_dpdk_post_init 00:03:55.382 ************************************ 00:03:55.382 14:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.382 EAL: Detected CPU lcores: 10 00:03:55.382 EAL: Detected NUMA nodes: 1 00:03:55.382 EAL: Detected shared linkage of DPDK 00:03:55.382 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.382 EAL: Selected IOVA mode 'PA' 00:03:55.382 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.641 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:55.641 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:55.641 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:03:55.641 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:03:55.641 Starting DPDK initialization... 00:03:55.641 Starting SPDK post initialization... 00:03:55.641 SPDK NVMe probe 00:03:55.641 Attaching to 0000:00:06.0 00:03:55.641 Attaching to 0000:00:07.0 00:03:55.641 Attaching to 0000:00:08.0 00:03:55.641 Attaching to 0000:00:09.0 00:03:55.641 Attached to 0000:00:06.0 00:03:55.641 Attached to 0000:00:07.0 00:03:55.641 Attached to 0000:00:09.0 00:03:55.641 Attached to 0000:00:08.0 00:03:55.641 Cleaning up... 00:03:55.641 ************************************ 00:03:55.641 END TEST env_dpdk_post_init 00:03:55.641 ************************************ 00:03:55.641 00:03:55.641 real 0m0.238s 00:03:55.641 user 0m0.055s 00:03:55.641 sys 0m0.083s 00:03:55.641 14:04:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:55.641 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.641 14:04:56 -- env/env.sh@26 -- # uname 00:03:55.641 14:04:56 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.641 14:04:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.641 14:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.641 14:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.641 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:03:55.641 ************************************ 00:03:55.641 START TEST env_mem_callbacks 00:03:55.641 ************************************ 00:03:55.641 14:04:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.641 EAL: Detected CPU lcores: 10 00:03:55.641 EAL: Detected NUMA nodes: 1 00:03:55.641 EAL: Detected shared linkage of DPDK 00:03:55.641 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.641 EAL: Selected IOVA mode 'PA' 00:03:55.903 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.903 00:03:55.903 00:03:55.903 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.903 http://cunit.sourceforge.net/ 00:03:55.903 00:03:55.903 00:03:55.903 Suite: memory 00:03:55.903 Test: test ... 
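The register/malloc/free/unregister trace that follows is driven by calls of the shape below: each spdk_mem_register()/spdk_mem_unregister() fires every mem map's notify callback, which is what the mem_callbacks suite records. A sketch under the same 2 MB-alignment assumption as env_memory:

```c
/* Sketch, not from the log: drive the REGISTER/UNREGISTER notifications
 * that the mem_callbacks suite below counts. */
#include "spdk/env.h"
#include <stdlib.h>

#define REGION (2ULL * 1024 * 1024)

static int
register_region(void)
{
	void *buf = NULL;
	int rc;

	/* posix_memalign supplies the 2 MB-aligned block the API requires. */
	if (posix_memalign(&buf, REGION, REGION) != 0) {
		return -1;
	}

	rc = spdk_mem_register(buf, REGION);       /* notify(REGISTER) fires */
	if (rc == 0) {
		rc = spdk_mem_unregister(buf, REGION); /* notify(UNREGISTER) fires */
	}

	free(buf);
	return rc;
}
```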
00:03:55.903 register 0x200000200000 2097152 00:03:55.903 malloc 3145728 00:03:55.903 register 0x200000400000 4194304 00:03:55.903 buf 0x2000004fffc0 len 3145728 PASSED 00:03:55.903 malloc 64 00:03:55.903 buf 0x2000004ffec0 len 64 PASSED 00:03:55.903 malloc 4194304 00:03:55.903 register 0x200000800000 6291456 00:03:55.903 buf 0x2000009fffc0 len 4194304 PASSED 00:03:55.903 free 0x2000004fffc0 3145728 00:03:55.903 free 0x2000004ffec0 64 00:03:55.903 unregister 0x200000400000 4194304 PASSED 00:03:55.903 free 0x2000009fffc0 4194304 00:03:55.903 unregister 0x200000800000 6291456 PASSED 00:03:55.903 malloc 8388608 00:03:55.903 register 0x200000400000 10485760 00:03:55.903 buf 0x2000005fffc0 len 8388608 PASSED 00:03:55.903 free 0x2000005fffc0 8388608 00:03:55.903 unregister 0x200000400000 10485760 PASSED 00:03:55.903 passed 00:03:55.903 00:03:55.903 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.903 suites 1 1 n/a 0 0 00:03:55.903 tests 1 1 1 0 0 00:03:55.903 asserts 15 15 15 0 n/a 00:03:55.903 00:03:55.903 Elapsed time = 0.041 seconds 00:03:55.903 00:03:55.903 real 0m0.217s 00:03:55.903 user 0m0.055s 00:03:55.903 sys 0m0.054s 00:03:55.903 14:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:55.903 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:55.903 ************************************ 00:03:55.903 END TEST env_mem_callbacks 00:03:55.903 ************************************ 00:03:55.903 00:03:55.903 real 0m6.731s 00:03:55.903 user 0m4.986s 00:03:55.903 sys 0m1.293s 00:03:55.903 14:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:55.903 ************************************ 00:03:55.903 END TEST env 00:03:55.903 ************************************ 00:03:55.903 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:55.903 14:04:57 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.903 14:04:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:55.903 14:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:55.903 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:55.903 ************************************ 00:03:55.903 START TEST rpc 00:03:55.903 ************************************ 00:03:55.903 14:04:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.903 * Looking for test storage... 
00:03:55.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:55.903 14:04:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:55.903 14:04:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:55.903 14:04:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:56.164 14:04:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:56.164 14:04:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:56.164 14:04:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:56.164 14:04:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:56.164 14:04:57 -- scripts/common.sh@335 -- # IFS=.-: 00:03:56.164 14:04:57 -- scripts/common.sh@335 -- # read -ra ver1 00:03:56.164 14:04:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.164 14:04:57 -- scripts/common.sh@336 -- # read -ra ver2 00:03:56.164 14:04:57 -- scripts/common.sh@337 -- # local 'op=<' 00:03:56.164 14:04:57 -- scripts/common.sh@339 -- # ver1_l=2 00:03:56.164 14:04:57 -- scripts/common.sh@340 -- # ver2_l=1 00:03:56.164 14:04:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:56.164 14:04:57 -- scripts/common.sh@343 -- # case "$op" in 00:03:56.164 14:04:57 -- scripts/common.sh@344 -- # : 1 00:03:56.164 14:04:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:56.164 14:04:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.164 14:04:57 -- scripts/common.sh@364 -- # decimal 1 00:03:56.164 14:04:57 -- scripts/common.sh@352 -- # local d=1 00:03:56.164 14:04:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.164 14:04:57 -- scripts/common.sh@354 -- # echo 1 00:03:56.164 14:04:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:56.164 14:04:57 -- scripts/common.sh@365 -- # decimal 2 00:03:56.164 14:04:57 -- scripts/common.sh@352 -- # local d=2 00:03:56.164 14:04:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.164 14:04:57 -- scripts/common.sh@354 -- # echo 2 00:03:56.164 14:04:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:56.164 14:04:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:56.164 14:04:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:56.164 14:04:57 -- scripts/common.sh@367 -- # return 0 00:03:56.164 14:04:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.164 14:04:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.164 --rc genhtml_branch_coverage=1 00:03:56.164 --rc genhtml_function_coverage=1 00:03:56.164 --rc genhtml_legend=1 00:03:56.164 --rc geninfo_all_blocks=1 00:03:56.165 --rc geninfo_unexecuted_blocks=1 00:03:56.165 00:03:56.165 ' 00:03:56.165 14:04:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.165 --rc genhtml_branch_coverage=1 00:03:56.165 --rc genhtml_function_coverage=1 00:03:56.165 --rc genhtml_legend=1 00:03:56.165 --rc geninfo_all_blocks=1 00:03:56.165 --rc geninfo_unexecuted_blocks=1 00:03:56.165 00:03:56.165 ' 00:03:56.165 14:04:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.165 --rc genhtml_branch_coverage=1 00:03:56.165 --rc genhtml_function_coverage=1 00:03:56.165 --rc genhtml_legend=1 00:03:56.165 --rc geninfo_all_blocks=1 00:03:56.165 --rc geninfo_unexecuted_blocks=1 00:03:56.165 00:03:56.165 ' 00:03:56.165 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.165 14:04:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.165 --rc genhtml_branch_coverage=1 00:03:56.165 --rc genhtml_function_coverage=1 00:03:56.165 --rc genhtml_legend=1 00:03:56.165 --rc geninfo_all_blocks=1 00:03:56.165 --rc geninfo_unexecuted_blocks=1 00:03:56.165 00:03:56.165 ' 00:03:56.165 14:04:57 -- rpc/rpc.sh@65 -- # spdk_pid=56171 00:03:56.165 14:04:57 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.165 14:04:57 -- rpc/rpc.sh@67 -- # waitforlisten 56171 00:03:56.165 14:04:57 -- common/autotest_common.sh@829 -- # '[' -z 56171 ']' 00:03:56.165 14:04:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.165 14:04:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:56.165 14:04:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.165 14:04:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:56.165 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.165 14:04:57 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:56.165 [2024-12-04 14:04:57.496344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:56.165 [2024-12-04 14:04:57.496692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56171 ] 00:03:56.423 [2024-12-04 14:04:57.646881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.423 [2024-12-04 14:04:57.798826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:56.423 [2024-12-04 14:04:57.798975] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.423 [2024-12-04 14:04:57.798987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56171' to capture a snapshot of events at runtime. 00:03:56.423 [2024-12-04 14:04:57.798994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56171 for offline analysis/debug. 
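What "waitforlisten" above is waiting for: spdk_tgt is an SPDK app that, once its reactor starts, accepts JSON-RPC on /var/tmp/spdk.sock. A hedged sketch of that startup shape (not the actual spdk_tgt source); field and function names follow spdk/event.h of this era and are worth re-checking against your tree:

```c
/* Sketch of an spdk_tgt-style entry point. */
#include "spdk/event.h"

static void
app_started(void *ctx)
{
	/* Runs once "Reactor started on core 0" is printed; the RPC socket
	 * (/var/tmp/spdk.sock here) is then ready for bdev_* calls. */
	(void)ctx;
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts)); /* zeroes then sets defaults */
	opts.name = "spdk_tgt";
	opts.rpc_addr = "/var/tmp/spdk.sock";

	rc = spdk_app_start(&opts, app_started, NULL); /* blocks until shutdown */
	spdk_app_fini();
	return rc;
}
```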
00:03:56.423 [2024-12-04 14:04:57.799016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.991 14:04:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.991 14:04:58 -- common/autotest_common.sh@862 -- # return 0 00:03:56.991 14:04:58 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.991 14:04:58 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.991 14:04:58 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:56.991 14:04:58 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:56.991 14:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.991 14:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 ************************************ 00:03:56.991 START TEST rpc_integrity 00:03:56.991 ************************************ 00:03:56.991 14:04:58 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:56.991 14:04:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.991 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.991 14:04:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.991 14:04:58 -- rpc/rpc.sh@13 -- # jq length 00:03:56.991 14:04:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.991 14:04:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:56.991 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.991 14:04:58 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:56.991 14:04:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.991 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.991 14:04:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.991 { 00:03:56.991 "name": "Malloc0", 00:03:56.991 "aliases": [ 00:03:56.991 "d05b73d8-5aa3-4b75-a6cf-b74d1f8afbf1" 00:03:56.991 ], 00:03:56.991 "product_name": "Malloc disk", 00:03:56.991 "block_size": 512, 00:03:56.991 "num_blocks": 16384, 00:03:56.991 "uuid": "d05b73d8-5aa3-4b75-a6cf-b74d1f8afbf1", 00:03:56.991 "assigned_rate_limits": { 00:03:56.991 "rw_ios_per_sec": 0, 00:03:56.991 "rw_mbytes_per_sec": 0, 00:03:56.991 "r_mbytes_per_sec": 0, 00:03:56.991 "w_mbytes_per_sec": 0 00:03:56.991 }, 00:03:56.991 "claimed": false, 00:03:56.991 "zoned": false, 00:03:56.991 "supported_io_types": { 00:03:56.991 "read": true, 00:03:56.991 "write": true, 00:03:56.991 "unmap": true, 00:03:56.991 "write_zeroes": true, 00:03:56.991 "flush": true, 00:03:56.991 "reset": true, 00:03:56.991 "compare": false, 00:03:56.991 "compare_and_write": false, 00:03:56.991 "abort": true, 00:03:56.991 "nvme_admin": false, 00:03:56.991 "nvme_io": false 00:03:56.991 }, 00:03:56.991 "memory_domains": [ 00:03:56.991 { 00:03:56.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.991 
"dma_device_type": 2 00:03:56.991 } 00:03:56.991 ], 00:03:56.991 "driver_specific": {} 00:03:56.991 } 00:03:56.991 ]' 00:03:56.991 14:04:58 -- rpc/rpc.sh@17 -- # jq length 00:03:56.991 14:04:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:56.991 14:04:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:56.991 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 [2024-12-04 14:04:58.421014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:56.991 [2024-12-04 14:04:58.421060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.991 [2024-12-04 14:04:58.421077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:03:56.991 [2024-12-04 14:04:58.421093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.991 [2024-12-04 14:04:58.422721] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.991 [2024-12-04 14:04:58.422752] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:56.991 Passthru0 00:03:56.991 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.991 14:04:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:56.991 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:56.991 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:56.991 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:56.991 14:04:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:56.991 { 00:03:56.991 "name": "Malloc0", 00:03:56.991 "aliases": [ 00:03:56.991 "d05b73d8-5aa3-4b75-a6cf-b74d1f8afbf1" 00:03:56.991 ], 00:03:56.991 "product_name": "Malloc disk", 00:03:56.991 "block_size": 512, 00:03:56.991 "num_blocks": 16384, 00:03:56.991 "uuid": "d05b73d8-5aa3-4b75-a6cf-b74d1f8afbf1", 00:03:56.991 "assigned_rate_limits": { 00:03:56.991 "rw_ios_per_sec": 0, 00:03:56.991 "rw_mbytes_per_sec": 0, 00:03:56.991 "r_mbytes_per_sec": 0, 00:03:56.991 "w_mbytes_per_sec": 0 00:03:56.991 }, 00:03:56.991 "claimed": true, 00:03:56.991 "claim_type": "exclusive_write", 00:03:56.991 "zoned": false, 00:03:56.991 "supported_io_types": { 00:03:56.991 "read": true, 00:03:56.991 "write": true, 00:03:56.991 "unmap": true, 00:03:56.991 "write_zeroes": true, 00:03:56.991 "flush": true, 00:03:56.991 "reset": true, 00:03:56.991 "compare": false, 00:03:56.991 "compare_and_write": false, 00:03:56.991 "abort": true, 00:03:56.991 "nvme_admin": false, 00:03:56.991 "nvme_io": false 00:03:56.991 }, 00:03:56.991 "memory_domains": [ 00:03:56.991 { 00:03:56.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.991 "dma_device_type": 2 00:03:56.991 } 00:03:56.991 ], 00:03:56.991 "driver_specific": {} 00:03:56.991 }, 00:03:56.991 { 00:03:56.991 "name": "Passthru0", 00:03:56.991 "aliases": [ 00:03:56.991 "e31835a2-6bd4-5e02-b540-feefe6c5a950" 00:03:56.991 ], 00:03:56.991 "product_name": "passthru", 00:03:56.991 "block_size": 512, 00:03:56.991 "num_blocks": 16384, 00:03:56.991 "uuid": "e31835a2-6bd4-5e02-b540-feefe6c5a950", 00:03:56.991 "assigned_rate_limits": { 00:03:56.991 "rw_ios_per_sec": 0, 00:03:56.991 "rw_mbytes_per_sec": 0, 00:03:56.991 "r_mbytes_per_sec": 0, 00:03:56.991 "w_mbytes_per_sec": 0 00:03:56.991 }, 00:03:56.991 "claimed": false, 00:03:56.991 "zoned": false, 00:03:56.991 "supported_io_types": { 00:03:56.991 "read": true, 00:03:56.991 "write": true, 00:03:56.991 "unmap": true, 00:03:56.991 
"write_zeroes": true, 00:03:56.991 "flush": true, 00:03:56.991 "reset": true, 00:03:56.991 "compare": false, 00:03:56.991 "compare_and_write": false, 00:03:56.991 "abort": true, 00:03:56.991 "nvme_admin": false, 00:03:56.991 "nvme_io": false 00:03:56.991 }, 00:03:56.991 "memory_domains": [ 00:03:56.991 { 00:03:56.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.991 "dma_device_type": 2 00:03:56.991 } 00:03:56.991 ], 00:03:56.991 "driver_specific": { 00:03:56.991 "passthru": { 00:03:56.991 "name": "Passthru0", 00:03:56.991 "base_bdev_name": "Malloc0" 00:03:56.991 } 00:03:56.991 } 00:03:56.991 } 00:03:56.991 ]' 00:03:56.991 14:04:58 -- rpc/rpc.sh@21 -- # jq length 00:03:57.251 14:04:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.251 14:04:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.251 14:04:58 -- rpc/rpc.sh@26 -- # jq length 00:03:57.251 ************************************ 00:03:57.251 END TEST rpc_integrity 00:03:57.251 ************************************ 00:03:57.251 14:04:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.251 00:03:57.251 real 0m0.221s 00:03:57.251 user 0m0.137s 00:03:57.251 sys 0m0.017s 00:03:57.251 14:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.251 14:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.251 14:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 ************************************ 00:03:57.251 START TEST rpc_plugins 00:03:57.251 ************************************ 00:03:57.251 14:04:58 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:03:57.251 14:04:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.251 14:04:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.251 { 00:03:57.251 "name": "Malloc1", 00:03:57.251 "aliases": [ 00:03:57.251 "f1880504-7678-405c-a6aa-e7a66fb19f82" 00:03:57.251 ], 00:03:57.251 "product_name": "Malloc disk", 00:03:57.251 
"block_size": 4096, 00:03:57.251 "num_blocks": 256, 00:03:57.251 "uuid": "f1880504-7678-405c-a6aa-e7a66fb19f82", 00:03:57.251 "assigned_rate_limits": { 00:03:57.251 "rw_ios_per_sec": 0, 00:03:57.251 "rw_mbytes_per_sec": 0, 00:03:57.251 "r_mbytes_per_sec": 0, 00:03:57.251 "w_mbytes_per_sec": 0 00:03:57.251 }, 00:03:57.251 "claimed": false, 00:03:57.251 "zoned": false, 00:03:57.251 "supported_io_types": { 00:03:57.251 "read": true, 00:03:57.251 "write": true, 00:03:57.251 "unmap": true, 00:03:57.251 "write_zeroes": true, 00:03:57.251 "flush": true, 00:03:57.251 "reset": true, 00:03:57.251 "compare": false, 00:03:57.251 "compare_and_write": false, 00:03:57.251 "abort": true, 00:03:57.251 "nvme_admin": false, 00:03:57.251 "nvme_io": false 00:03:57.251 }, 00:03:57.251 "memory_domains": [ 00:03:57.251 { 00:03:57.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.251 "dma_device_type": 2 00:03:57.251 } 00:03:57.251 ], 00:03:57.251 "driver_specific": {} 00:03:57.251 } 00:03:57.251 ]' 00:03:57.251 14:04:58 -- rpc/rpc.sh@32 -- # jq length 00:03:57.251 14:04:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.251 14:04:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.251 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.251 14:04:58 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.251 14:04:58 -- rpc/rpc.sh@36 -- # jq length 00:03:57.251 ************************************ 00:03:57.251 END TEST rpc_plugins 00:03:57.251 ************************************ 00:03:57.251 14:04:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.251 00:03:57.251 real 0m0.104s 00:03:57.251 user 0m0.066s 00:03:57.251 sys 0m0.012s 00:03:57.251 14:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.251 14:04:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.251 14:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.251 14:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.251 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.510 ************************************ 00:03:57.510 START TEST rpc_trace_cmd_test 00:03:57.510 ************************************ 00:03:57.510 14:04:58 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:03:57.510 14:04:58 -- rpc/rpc.sh@40 -- # local info 00:03:57.510 14:04:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.510 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.510 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.510 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.510 14:04:58 -- rpc/rpc.sh@42 -- # info='{ 00:03:57.510 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56171", 00:03:57.510 "tpoint_group_mask": "0x8", 00:03:57.510 "iscsi_conn": { 00:03:57.510 "mask": "0x2", 00:03:57.510 "tpoint_mask": "0x0" 00:03:57.510 }, 00:03:57.510 "scsi": { 00:03:57.510 "mask": "0x4", 00:03:57.510 "tpoint_mask": "0x0" 00:03:57.510 }, 00:03:57.510 "bdev": { 00:03:57.510 "mask": "0x8", 00:03:57.510 "tpoint_mask": 
"0xffffffffffffffff" 00:03:57.510 }, 00:03:57.510 "nvmf_rdma": { 00:03:57.510 "mask": "0x10", 00:03:57.510 "tpoint_mask": "0x0" 00:03:57.510 }, 00:03:57.510 "nvmf_tcp": { 00:03:57.510 "mask": "0x20", 00:03:57.510 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "ftl": { 00:03:57.511 "mask": "0x40", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "blobfs": { 00:03:57.511 "mask": "0x80", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "dsa": { 00:03:57.511 "mask": "0x200", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "thread": { 00:03:57.511 "mask": "0x400", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "nvme_pcie": { 00:03:57.511 "mask": "0x800", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "iaa": { 00:03:57.511 "mask": "0x1000", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "nvme_tcp": { 00:03:57.511 "mask": "0x2000", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 }, 00:03:57.511 "bdev_nvme": { 00:03:57.511 "mask": "0x4000", 00:03:57.511 "tpoint_mask": "0x0" 00:03:57.511 } 00:03:57.511 }' 00:03:57.511 14:04:58 -- rpc/rpc.sh@43 -- # jq length 00:03:57.511 14:04:58 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:57.511 14:04:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.511 14:04:58 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.511 14:04:58 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.511 14:04:58 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.511 14:04:58 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:57.511 14:04:58 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:57.511 14:04:58 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:57.511 ************************************ 00:03:57.511 END TEST rpc_trace_cmd_test 00:03:57.511 ************************************ 00:03:57.511 14:04:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:57.511 00:03:57.511 real 0m0.169s 00:03:57.511 user 0m0.138s 00:03:57.511 sys 0m0.024s 00:03:57.511 14:04:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.511 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.511 14:04:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.511 14:04:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.511 14:04:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.511 14:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.511 14:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.511 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.511 ************************************ 00:03:57.511 START TEST rpc_daemon_integrity 00:03:57.511 ************************************ 00:03:57.511 14:04:58 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:57.511 14:04:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.511 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.511 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.511 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.511 14:04:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.511 14:04:58 -- rpc/rpc.sh@13 -- # jq length 00:03:57.511 14:04:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.511 14:04:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.511 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.511 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.770 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.770 14:04:58 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.770 14:04:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.770 14:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.770 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:03:57.770 14:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.770 14:04:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.770 { 00:03:57.770 "name": "Malloc2", 00:03:57.770 "aliases": [ 00:03:57.770 "5b228a71-2ca9-4bc7-9bd3-b7eac4ea5839" 00:03:57.770 ], 00:03:57.770 "product_name": "Malloc disk", 00:03:57.770 "block_size": 512, 00:03:57.770 "num_blocks": 16384, 00:03:57.770 "uuid": "5b228a71-2ca9-4bc7-9bd3-b7eac4ea5839", 00:03:57.770 "assigned_rate_limits": { 00:03:57.770 "rw_ios_per_sec": 0, 00:03:57.770 "rw_mbytes_per_sec": 0, 00:03:57.770 "r_mbytes_per_sec": 0, 00:03:57.770 "w_mbytes_per_sec": 0 00:03:57.770 }, 00:03:57.770 "claimed": false, 00:03:57.770 "zoned": false, 00:03:57.770 "supported_io_types": { 00:03:57.770 "read": true, 00:03:57.770 "write": true, 00:03:57.770 "unmap": true, 00:03:57.770 "write_zeroes": true, 00:03:57.770 "flush": true, 00:03:57.770 "reset": true, 00:03:57.770 "compare": false, 00:03:57.770 "compare_and_write": false, 00:03:57.770 "abort": true, 00:03:57.770 "nvme_admin": false, 00:03:57.770 "nvme_io": false 00:03:57.770 }, 00:03:57.770 "memory_domains": [ 00:03:57.770 { 00:03:57.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.770 "dma_device_type": 2 00:03:57.770 } 00:03:57.770 ], 00:03:57.770 "driver_specific": {} 00:03:57.770 } 00:03:57.770 ]' 00:03:57.770 14:04:58 -- rpc/rpc.sh@17 -- # jq length 00:03:57.770 14:04:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.770 14:04:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.770 14:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.771 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 [2024-12-04 14:04:59.032983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.771 [2024-12-04 14:04:59.033111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.771 [2024-12-04 14:04:59.033130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:03:57.771 [2024-12-04 14:04:59.033139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.771 [2024-12-04 14:04:59.034736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.771 [2024-12-04 14:04:59.034767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.771 Passthru0 00:03:57.771 14:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.771 14:04:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.771 14:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.771 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 14:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.771 14:04:59 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.771 { 00:03:57.771 "name": "Malloc2", 00:03:57.771 "aliases": [ 00:03:57.771 "5b228a71-2ca9-4bc7-9bd3-b7eac4ea5839" 00:03:57.771 ], 00:03:57.771 "product_name": "Malloc disk", 00:03:57.771 "block_size": 512, 00:03:57.771 "num_blocks": 16384, 00:03:57.771 "uuid": "5b228a71-2ca9-4bc7-9bd3-b7eac4ea5839", 00:03:57.771 "assigned_rate_limits": { 00:03:57.771 "rw_ios_per_sec": 0, 00:03:57.771 "rw_mbytes_per_sec": 0, 00:03:57.771 "r_mbytes_per_sec": 0, 00:03:57.771 
"w_mbytes_per_sec": 0 00:03:57.771 }, 00:03:57.771 "claimed": true, 00:03:57.771 "claim_type": "exclusive_write", 00:03:57.771 "zoned": false, 00:03:57.771 "supported_io_types": { 00:03:57.771 "read": true, 00:03:57.771 "write": true, 00:03:57.771 "unmap": true, 00:03:57.771 "write_zeroes": true, 00:03:57.771 "flush": true, 00:03:57.771 "reset": true, 00:03:57.771 "compare": false, 00:03:57.771 "compare_and_write": false, 00:03:57.771 "abort": true, 00:03:57.771 "nvme_admin": false, 00:03:57.771 "nvme_io": false 00:03:57.771 }, 00:03:57.771 "memory_domains": [ 00:03:57.771 { 00:03:57.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.771 "dma_device_type": 2 00:03:57.771 } 00:03:57.771 ], 00:03:57.771 "driver_specific": {} 00:03:57.771 }, 00:03:57.771 { 00:03:57.771 "name": "Passthru0", 00:03:57.771 "aliases": [ 00:03:57.771 "796a128a-e01e-5a7a-94d8-adffc90d5986" 00:03:57.771 ], 00:03:57.771 "product_name": "passthru", 00:03:57.771 "block_size": 512, 00:03:57.771 "num_blocks": 16384, 00:03:57.771 "uuid": "796a128a-e01e-5a7a-94d8-adffc90d5986", 00:03:57.771 "assigned_rate_limits": { 00:03:57.771 "rw_ios_per_sec": 0, 00:03:57.771 "rw_mbytes_per_sec": 0, 00:03:57.771 "r_mbytes_per_sec": 0, 00:03:57.771 "w_mbytes_per_sec": 0 00:03:57.771 }, 00:03:57.771 "claimed": false, 00:03:57.771 "zoned": false, 00:03:57.771 "supported_io_types": { 00:03:57.771 "read": true, 00:03:57.771 "write": true, 00:03:57.771 "unmap": true, 00:03:57.771 "write_zeroes": true, 00:03:57.771 "flush": true, 00:03:57.771 "reset": true, 00:03:57.771 "compare": false, 00:03:57.771 "compare_and_write": false, 00:03:57.771 "abort": true, 00:03:57.771 "nvme_admin": false, 00:03:57.771 "nvme_io": false 00:03:57.771 }, 00:03:57.771 "memory_domains": [ 00:03:57.771 { 00:03:57.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.771 "dma_device_type": 2 00:03:57.771 } 00:03:57.771 ], 00:03:57.771 "driver_specific": { 00:03:57.771 "passthru": { 00:03:57.771 "name": "Passthru0", 00:03:57.771 "base_bdev_name": "Malloc2" 00:03:57.771 } 00:03:57.771 } 00:03:57.771 } 00:03:57.771 ]' 00:03:57.771 14:04:59 -- rpc/rpc.sh@21 -- # jq length 00:03:57.771 14:04:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.771 14:04:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.771 14:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.771 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 14:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.771 14:04:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.771 14:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.771 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 14:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.771 14:04:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.771 14:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:57.771 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 14:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:57.771 14:04:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.771 14:04:59 -- rpc/rpc.sh@26 -- # jq length 00:03:57.771 ************************************ 00:03:57.771 END TEST rpc_daemon_integrity 00:03:57.771 ************************************ 00:03:57.771 14:04:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.771 00:03:57.771 real 0m0.225s 00:03:57.771 user 0m0.123s 00:03:57.771 sys 0m0.032s 00:03:57.771 14:04:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:57.771 
14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 14:04:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:57.771 14:04:59 -- rpc/rpc.sh@84 -- # killprocess 56171 00:03:57.771 14:04:59 -- common/autotest_common.sh@936 -- # '[' -z 56171 ']' 00:03:57.771 14:04:59 -- common/autotest_common.sh@940 -- # kill -0 56171 00:03:57.771 14:04:59 -- common/autotest_common.sh@941 -- # uname 00:03:57.771 14:04:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:57.771 14:04:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56171 00:03:57.771 killing process with pid 56171 00:03:57.771 14:04:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:57.771 14:04:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:57.771 14:04:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56171' 00:03:57.771 14:04:59 -- common/autotest_common.sh@955 -- # kill 56171 00:03:57.771 14:04:59 -- common/autotest_common.sh@960 -- # wait 56171 00:03:59.152 ************************************ 00:03:59.152 END TEST rpc 00:03:59.152 ************************************ 00:03:59.152 00:03:59.152 real 0m3.100s 00:03:59.152 user 0m3.508s 00:03:59.152 sys 0m0.564s 00:03:59.152 14:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.152 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.152 14:05:00 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:59.152 14:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.152 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.152 ************************************ 00:03:59.152 START TEST rpc_client 00:03:59.152 ************************************ 00:03:59.152 14:05:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:59.152 * Looking for test storage... 00:03:59.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:59.152 14:05:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:59.152 14:05:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:59.152 14:05:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.152 14:05:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.152 14:05:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.152 14:05:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.152 14:05:00 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.152 14:05:00 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.152 14:05:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.152 14:05:00 -- scripts/common.sh@336 -- # read -ra ver2 00:03:59.152 14:05:00 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.152 14:05:00 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.152 14:05:00 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.152 14:05:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.152 14:05:00 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.152 14:05:00 -- scripts/common.sh@344 -- # : 1 00:03:59.152 14:05:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.152 14:05:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.152 14:05:00 -- scripts/common.sh@364 -- # decimal 1 00:03:59.152 14:05:00 -- scripts/common.sh@352 -- # local d=1 00:03:59.152 14:05:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.152 14:05:00 -- scripts/common.sh@354 -- # echo 1 00:03:59.152 14:05:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.152 14:05:00 -- scripts/common.sh@365 -- # decimal 2 00:03:59.152 14:05:00 -- scripts/common.sh@352 -- # local d=2 00:03:59.152 14:05:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.152 14:05:00 -- scripts/common.sh@354 -- # echo 2 00:03:59.152 14:05:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.152 14:05:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.152 14:05:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.152 14:05:00 -- scripts/common.sh@367 -- # return 0 00:03:59.152 14:05:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.152 --rc genhtml_branch_coverage=1 00:03:59.152 --rc genhtml_function_coverage=1 00:03:59.152 --rc genhtml_legend=1 00:03:59.152 --rc geninfo_all_blocks=1 00:03:59.152 --rc geninfo_unexecuted_blocks=1 00:03:59.152 00:03:59.152 ' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.152 --rc genhtml_branch_coverage=1 00:03:59.152 --rc genhtml_function_coverage=1 00:03:59.152 --rc genhtml_legend=1 00:03:59.152 --rc geninfo_all_blocks=1 00:03:59.152 --rc geninfo_unexecuted_blocks=1 00:03:59.152 00:03:59.152 ' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.152 --rc genhtml_branch_coverage=1 00:03:59.152 --rc genhtml_function_coverage=1 00:03:59.152 --rc genhtml_legend=1 00:03:59.152 --rc geninfo_all_blocks=1 00:03:59.152 --rc geninfo_unexecuted_blocks=1 00:03:59.152 00:03:59.152 ' 00:03:59.152 14:05:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.152 --rc genhtml_branch_coverage=1 00:03:59.152 --rc genhtml_function_coverage=1 00:03:59.152 --rc genhtml_legend=1 00:03:59.152 --rc geninfo_all_blocks=1 00:03:59.152 --rc geninfo_unexecuted_blocks=1 00:03:59.152 00:03:59.152 ' 00:03:59.152 14:05:00 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:59.152 OK 00:03:59.152 14:05:00 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:59.152 00:03:59.152 real 0m0.188s 00:03:59.152 user 0m0.111s 00:03:59.152 sys 0m0.084s 00:03:59.152 14:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.152 ************************************ 00:03:59.152 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.152 END TEST rpc_client 00:03:59.152 ************************************ 00:03:59.414 14:05:00 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:59.414 14:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.414 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.414 ************************************ 00:03:59.414 START TEST 
json_config 00:03:59.414 ************************************ 00:03:59.414 14:05:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:59.414 14:05:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:59.414 14:05:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:59.414 14:05:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.414 14:05:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.414 14:05:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.414 14:05:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.414 14:05:00 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.414 14:05:00 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.414 14:05:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.414 14:05:00 -- scripts/common.sh@336 -- # read -ra ver2 00:03:59.414 14:05:00 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.414 14:05:00 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.414 14:05:00 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.414 14:05:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.414 14:05:00 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.414 14:05:00 -- scripts/common.sh@344 -- # : 1 00:03:59.414 14:05:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.414 14:05:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.414 14:05:00 -- scripts/common.sh@364 -- # decimal 1 00:03:59.414 14:05:00 -- scripts/common.sh@352 -- # local d=1 00:03:59.414 14:05:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.414 14:05:00 -- scripts/common.sh@354 -- # echo 1 00:03:59.414 14:05:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.414 14:05:00 -- scripts/common.sh@365 -- # decimal 2 00:03:59.414 14:05:00 -- scripts/common.sh@352 -- # local d=2 00:03:59.414 14:05:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.414 14:05:00 -- scripts/common.sh@354 -- # echo 2 00:03:59.414 14:05:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.414 14:05:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.414 14:05:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.414 14:05:00 -- scripts/common.sh@367 -- # return 0 00:03:59.414 14:05:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.414 --rc genhtml_branch_coverage=1 00:03:59.414 --rc genhtml_function_coverage=1 00:03:59.414 --rc genhtml_legend=1 00:03:59.414 --rc geninfo_all_blocks=1 00:03:59.414 --rc geninfo_unexecuted_blocks=1 00:03:59.414 00:03:59.414 ' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.414 --rc genhtml_branch_coverage=1 00:03:59.414 --rc genhtml_function_coverage=1 00:03:59.414 --rc genhtml_legend=1 00:03:59.414 --rc geninfo_all_blocks=1 00:03:59.414 --rc geninfo_unexecuted_blocks=1 00:03:59.414 00:03:59.414 ' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.414 --rc genhtml_branch_coverage=1 00:03:59.414 --rc genhtml_function_coverage=1 00:03:59.414 --rc genhtml_legend=1 00:03:59.414 --rc 
geninfo_all_blocks=1 00:03:59.414 --rc geninfo_unexecuted_blocks=1 00:03:59.414 00:03:59.414 ' 00:03:59.414 14:05:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.414 --rc genhtml_branch_coverage=1 00:03:59.414 --rc genhtml_function_coverage=1 00:03:59.414 --rc genhtml_legend=1 00:03:59.414 --rc geninfo_all_blocks=1 00:03:59.414 --rc geninfo_unexecuted_blocks=1 00:03:59.414 00:03:59.414 ' 00:03:59.414 14:05:00 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.414 14:05:00 -- nvmf/common.sh@7 -- # uname -s 00:03:59.414 14:05:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.414 14:05:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.414 14:05:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.414 14:05:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.414 14:05:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.414 14:05:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.414 14:05:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.414 14:05:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.414 14:05:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.414 14:05:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.414 14:05:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:03:59.414 14:05:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:03:59.414 14:05:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.414 14:05:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.414 14:05:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.414 14:05:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.414 14:05:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.414 14:05:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.414 14:05:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.414 14:05:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.414 14:05:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.415 14:05:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.415 
14:05:00 -- paths/export.sh@5 -- # export PATH 00:03:59.415 14:05:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.415 14:05:00 -- nvmf/common.sh@46 -- # : 0 00:03:59.415 14:05:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:59.415 14:05:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:59.415 14:05:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:59.415 14:05:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.415 14:05:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.415 14:05:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:59.415 14:05:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:59.415 14:05:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:59.415 14:05:00 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:59.415 14:05:00 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:59.415 14:05:00 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:59.415 WARNING: No tests are enabled so not running JSON configuration tests 00:03:59.415 14:05:00 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:59.415 14:05:00 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:03:59.415 14:05:00 -- json_config/json_config.sh@27 -- # exit 0 00:03:59.415 00:03:59.415 real 0m0.147s 00:03:59.415 user 0m0.098s 00:03:59.415 sys 0m0.048s 00:03:59.415 14:05:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:59.415 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.415 ************************************ 00:03:59.415 END TEST json_config 00:03:59.415 ************************************ 00:03:59.415 14:05:00 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:59.415 14:05:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.415 14:05:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.415 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.415 ************************************ 00:03:59.415 START TEST json_config_extra_key 00:03:59.415 ************************************ 00:03:59.415 14:05:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:59.675 14:05:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:59.675 14:05:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:59.675 14:05:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.675 14:05:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.675 14:05:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.675 14:05:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.675 14:05:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.675 14:05:00 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.675 14:05:00 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.675 14:05:00 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.675 14:05:00 
-- scripts/common.sh@336 -- # read -ra ver2 00:03:59.675 14:05:00 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.675 14:05:00 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.675 14:05:00 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.675 14:05:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.675 14:05:00 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.675 14:05:00 -- scripts/common.sh@344 -- # : 1 00:03:59.675 14:05:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.675 14:05:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.675 14:05:00 -- scripts/common.sh@364 -- # decimal 1 00:03:59.675 14:05:00 -- scripts/common.sh@352 -- # local d=1 00:03:59.675 14:05:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.675 14:05:00 -- scripts/common.sh@354 -- # echo 1 00:03:59.675 14:05:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.675 14:05:00 -- scripts/common.sh@365 -- # decimal 2 00:03:59.675 14:05:00 -- scripts/common.sh@352 -- # local d=2 00:03:59.675 14:05:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.675 14:05:00 -- scripts/common.sh@354 -- # echo 2 00:03:59.675 14:05:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.675 14:05:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.675 14:05:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.675 14:05:00 -- scripts/common.sh@367 -- # return 0 00:03:59.675 14:05:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.675 14:05:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.675 --rc genhtml_branch_coverage=1 00:03:59.675 --rc genhtml_function_coverage=1 00:03:59.675 --rc genhtml_legend=1 00:03:59.675 --rc geninfo_all_blocks=1 00:03:59.675 --rc geninfo_unexecuted_blocks=1 00:03:59.675 00:03:59.675 ' 00:03:59.675 14:05:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.675 --rc genhtml_branch_coverage=1 00:03:59.675 --rc genhtml_function_coverage=1 00:03:59.675 --rc genhtml_legend=1 00:03:59.675 --rc geninfo_all_blocks=1 00:03:59.675 --rc geninfo_unexecuted_blocks=1 00:03:59.675 00:03:59.675 ' 00:03:59.675 14:05:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.675 --rc genhtml_branch_coverage=1 00:03:59.675 --rc genhtml_function_coverage=1 00:03:59.675 --rc genhtml_legend=1 00:03:59.675 --rc geninfo_all_blocks=1 00:03:59.675 --rc geninfo_unexecuted_blocks=1 00:03:59.675 00:03:59.675 ' 00:03:59.675 14:05:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.675 --rc genhtml_branch_coverage=1 00:03:59.675 --rc genhtml_function_coverage=1 00:03:59.675 --rc genhtml_legend=1 00:03:59.675 --rc geninfo_all_blocks=1 00:03:59.675 --rc geninfo_unexecuted_blocks=1 00:03:59.675 00:03:59.675 ' 00:03:59.675 14:05:00 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.675 14:05:00 -- nvmf/common.sh@7 -- # uname -s 00:03:59.675 14:05:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.675 14:05:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.675 14:05:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.675 14:05:00 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:03:59.675 14:05:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.675 14:05:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.675 14:05:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.675 14:05:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.675 14:05:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.675 14:05:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.675 14:05:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:03:59.675 14:05:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=9a56602d-4ef1-47a7-b8e8-8d1422718f64 00:03:59.675 14:05:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.675 14:05:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.675 14:05:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.675 14:05:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.675 14:05:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.675 14:05:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.675 14:05:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.675 14:05:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.675 14:05:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.675 14:05:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.675 14:05:01 -- paths/export.sh@5 -- # export PATH 00:03:59.675 14:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.675 14:05:01 -- nvmf/common.sh@46 -- # : 0 00:03:59.675 14:05:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:59.675 14:05:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:59.675 14:05:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:59.675 14:05:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.675 14:05:01 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.676 14:05:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:59.676 14:05:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:59.676 14:05:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:59.676 INFO: launching applications... 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56465 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:59.676 Waiting for target to run... 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:59.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:59.676 14:05:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56465 /var/tmp/spdk_tgt.sock 00:03:59.676 14:05:01 -- common/autotest_common.sh@829 -- # '[' -z 56465 ']' 00:03:59.676 14:05:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:59.676 14:05:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.676 14:05:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:59.676 14:05:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.676 14:05:01 -- common/autotest_common.sh@10 -- # set +x 00:03:59.676 [2024-12-04 14:05:01.084030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:03:59.676 [2024-12-04 14:05:01.084271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56465 ] 00:04:00.246 [2024-12-04 14:05:01.400260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.246 [2024-12-04 14:05:01.607311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:00.246 [2024-12-04 14:05:01.607691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.189 14:05:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.189 14:05:02 -- common/autotest_common.sh@862 -- # return 0 00:04:01.189 00:04:01.189 INFO: shutting down applications... 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56465 ]] 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56465 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:01.189 14:05:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:01.757 14:05:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:01.757 14:05:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:01.757 14:05:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:01.757 14:05:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:02.324 14:05:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:02.324 14:05:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:02.324 14:05:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:02.324 14:05:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:02.586 SPDK target shutdown done 00:04:02.586 Success 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56465 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:02.586 14:05:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:02.586 ************************************ 00:04:02.586 END TEST json_config_extra_key 00:04:02.586 ************************************ 00:04:02.586 00:04:02.586 real 0m3.163s 00:04:02.586 user 0m3.048s 00:04:02.586 sys 0m0.421s 00:04:02.586 14:05:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.586 14:05:04 -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.847 14:05:04 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.847 14:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.847 14:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.847 ************************************ 00:04:02.847 START TEST alias_rpc 00:04:02.847 ************************************ 00:04:02.847 14:05:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.847 * Looking for test storage... 00:04:02.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:02.847 14:05:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.847 14:05:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.847 14:05:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.847 14:05:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.847 14:05:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.847 14:05:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.847 14:05:04 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.847 14:05:04 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.847 14:05:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.847 14:05:04 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.847 14:05:04 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.847 14:05:04 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.847 14:05:04 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.847 14:05:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.847 14:05:04 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.847 14:05:04 -- scripts/common.sh@344 -- # : 1 00:04:02.847 14:05:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.847 14:05:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.847 14:05:04 -- scripts/common.sh@364 -- # decimal 1 00:04:02.847 14:05:04 -- scripts/common.sh@352 -- # local d=1 00:04:02.847 14:05:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.847 14:05:04 -- scripts/common.sh@354 -- # echo 1 00:04:02.847 14:05:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.847 14:05:04 -- scripts/common.sh@365 -- # decimal 2 00:04:02.847 14:05:04 -- scripts/common.sh@352 -- # local d=2 00:04:02.847 14:05:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.847 14:05:04 -- scripts/common.sh@354 -- # echo 2 00:04:02.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
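The shutdown that json_config_extra_key completed just above follows a fixed pattern: send SIGINT, then poll kill -0 up to 30 times at half-second intervals until the process disappears. Condensed from the json_config_test_shutdown_app trace (function name shortened here; the loop bounds, signal, and sleep come straight from the trace):

    # Graceful-shutdown loop as traced for pid 56465.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only tests that the pid still exists.
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # target survived ~15 seconds of polling
    }

The three sleep 0.5 iterations visible in the trace mean the target took roughly 1.5 seconds to drain and exit after SIGINT.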
00:04:02.847 14:05:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.847 14:05:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.847 14:05:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.847 14:05:04 -- scripts/common.sh@367 -- # return 0 00:04:02.847 14:05:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.847 --rc genhtml_branch_coverage=1 00:04:02.847 --rc genhtml_function_coverage=1 00:04:02.847 --rc genhtml_legend=1 00:04:02.847 --rc geninfo_all_blocks=1 00:04:02.847 --rc geninfo_unexecuted_blocks=1 00:04:02.847 00:04:02.847 ' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.847 --rc genhtml_branch_coverage=1 00:04:02.847 --rc genhtml_function_coverage=1 00:04:02.847 --rc genhtml_legend=1 00:04:02.847 --rc geninfo_all_blocks=1 00:04:02.847 --rc geninfo_unexecuted_blocks=1 00:04:02.847 00:04:02.847 ' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.847 --rc genhtml_branch_coverage=1 00:04:02.847 --rc genhtml_function_coverage=1 00:04:02.847 --rc genhtml_legend=1 00:04:02.847 --rc geninfo_all_blocks=1 00:04:02.847 --rc geninfo_unexecuted_blocks=1 00:04:02.847 00:04:02.847 ' 00:04:02.847 14:05:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.847 --rc genhtml_branch_coverage=1 00:04:02.847 --rc genhtml_function_coverage=1 00:04:02.847 --rc genhtml_legend=1 00:04:02.847 --rc geninfo_all_blocks=1 00:04:02.847 --rc geninfo_unexecuted_blocks=1 00:04:02.847 00:04:02.847 ' 00:04:02.847 14:05:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.847 14:05:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56563 00:04:02.847 14:05:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56563 00:04:02.847 14:05:04 -- common/autotest_common.sh@829 -- # '[' -z 56563 ']' 00:04:02.847 14:05:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.847 14:05:04 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.847 14:05:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.847 14:05:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.847 14:05:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.847 14:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.847 [2024-12-04 14:05:04.293315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
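The scripts/common.sh trace that precedes every test here is the lt 1.15 2 guard: it splits both version strings on '.', '-' and ':' and compares them field by field to decide whether the installed lcov needs the branch/function coverage flags. A compact reconstruction, limited to the strict '<' and '>' operators seen in the trace (the fallback for non-numeric or missing fields is an assumption):

    # Sketch of cmp_versions: numeric field-by-field version comparison.
    cmp_versions_sketch() {
        local -a ver1 ver2
        local op=$2 v n1 n2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            n1=${ver1[v]:-0}; [[ $n1 =~ ^[0-9]+$ ]] || n1=0   # assumed fallback to 0
            n2=${ver2[v]:-0}; [[ $n2 =~ ^[0-9]+$ ]] || n2=0
            if ((n1 > n2)); then [[ $op == '>' ]]; return; fi
            if ((n1 < n2)); then [[ $op == '<' ]]; return; fi
        done
        return 1    # equal versions satisfy neither strict comparison
    }
    # cmp_versions_sketch 1.15 '<' 2 succeeds, so the LCOV_OPTS above get exported.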
00:04:02.847 [2024-12-04 14:05:04.293701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56563 ] 00:04:03.107 [2024-12-04 14:05:04.445226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.365 [2024-12-04 14:05:04.595629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.366 [2024-12-04 14:05:04.595927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.930 14:05:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.930 14:05:05 -- common/autotest_common.sh@862 -- # return 0 00:04:03.930 14:05:05 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:03.930 14:05:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56563 00:04:03.930 14:05:05 -- common/autotest_common.sh@936 -- # '[' -z 56563 ']' 00:04:03.930 14:05:05 -- common/autotest_common.sh@940 -- # kill -0 56563 00:04:03.930 14:05:05 -- common/autotest_common.sh@941 -- # uname 00:04:03.930 14:05:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:03.930 14:05:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56563 00:04:03.930 killing process with pid 56563 00:04:03.930 14:05:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:03.930 14:05:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:03.930 14:05:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56563' 00:04:03.930 14:05:05 -- common/autotest_common.sh@955 -- # kill 56563 00:04:03.930 14:05:05 -- common/autotest_common.sh@960 -- # wait 56563 00:04:05.307 ************************************ 00:04:05.307 END TEST alias_rpc 00:04:05.307 ************************************ 00:04:05.307 00:04:05.307 real 0m2.423s 00:04:05.307 user 0m2.500s 00:04:05.307 sys 0m0.403s 00:04:05.307 14:05:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.307 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.307 14:05:06 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:05.307 14:05:06 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:05.307 14:05:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.307 14:05:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.307 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.307 ************************************ 00:04:05.307 START TEST spdkcli_tcp 00:04:05.307 ************************************ 00:04:05.307 14:05:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:05.307 * Looking for test storage... 
00:04:05.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:05.307 14:05:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:05.307 14:05:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:05.307 14:05:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:05.307 14:05:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:05.307 14:05:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:05.307 14:05:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:05.307 14:05:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:05.307 14:05:06 -- scripts/common.sh@335 -- # IFS=.-: 00:04:05.307 14:05:06 -- scripts/common.sh@335 -- # read -ra ver1 00:04:05.307 14:05:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.307 14:05:06 -- scripts/common.sh@336 -- # read -ra ver2 00:04:05.307 14:05:06 -- scripts/common.sh@337 -- # local 'op=<' 00:04:05.307 14:05:06 -- scripts/common.sh@339 -- # ver1_l=2 00:04:05.307 14:05:06 -- scripts/common.sh@340 -- # ver2_l=1 00:04:05.307 14:05:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:05.307 14:05:06 -- scripts/common.sh@343 -- # case "$op" in 00:04:05.307 14:05:06 -- scripts/common.sh@344 -- # : 1 00:04:05.307 14:05:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:05.307 14:05:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.307 14:05:06 -- scripts/common.sh@364 -- # decimal 1 00:04:05.307 14:05:06 -- scripts/common.sh@352 -- # local d=1 00:04:05.307 14:05:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.307 14:05:06 -- scripts/common.sh@354 -- # echo 1 00:04:05.307 14:05:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:05.307 14:05:06 -- scripts/common.sh@365 -- # decimal 2 00:04:05.307 14:05:06 -- scripts/common.sh@352 -- # local d=2 00:04:05.307 14:05:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.307 14:05:06 -- scripts/common.sh@354 -- # echo 2 00:04:05.307 14:05:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:05.307 14:05:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:05.307 14:05:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:05.308 14:05:06 -- scripts/common.sh@367 -- # return 0 00:04:05.308 14:05:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.308 14:05:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.308 --rc genhtml_branch_coverage=1 00:04:05.308 --rc genhtml_function_coverage=1 00:04:05.308 --rc genhtml_legend=1 00:04:05.308 --rc geninfo_all_blocks=1 00:04:05.308 --rc geninfo_unexecuted_blocks=1 00:04:05.308 00:04:05.308 ' 00:04:05.308 14:05:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.308 --rc genhtml_branch_coverage=1 00:04:05.308 --rc genhtml_function_coverage=1 00:04:05.308 --rc genhtml_legend=1 00:04:05.308 --rc geninfo_all_blocks=1 00:04:05.308 --rc geninfo_unexecuted_blocks=1 00:04:05.308 00:04:05.308 ' 00:04:05.308 14:05:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.308 --rc genhtml_branch_coverage=1 00:04:05.308 --rc genhtml_function_coverage=1 00:04:05.308 --rc genhtml_legend=1 00:04:05.308 --rc geninfo_all_blocks=1 00:04:05.308 --rc geninfo_unexecuted_blocks=1 00:04:05.308 00:04:05.308 ' 00:04:05.308 14:05:06 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:05.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.308 --rc genhtml_branch_coverage=1 00:04:05.308 --rc genhtml_function_coverage=1 00:04:05.308 --rc genhtml_legend=1 00:04:05.308 --rc geninfo_all_blocks=1 00:04:05.308 --rc geninfo_unexecuted_blocks=1 00:04:05.308 00:04:05.308 ' 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:05.308 14:05:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:05.308 14:05:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:05.308 14:05:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.308 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56653 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 56653 00:04:05.308 14:05:06 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:05.308 14:05:06 -- common/autotest_common.sh@829 -- # '[' -z 56653 ']' 00:04:05.308 14:05:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.308 14:05:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.308 14:05:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.308 14:05:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.308 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:05.568 [2024-12-04 14:05:06.770212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
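What follows is the heart of the spdkcli_tcp test: spdk_tgt only listens on a UNIX domain socket, so the script bridges TCP port 9998 to /var/tmp/spdk.sock with socat and then drives the RPC interface through the TCP side. Condensed from the tcp.sh@30/@33 trace below (the explicit kill of the bridge is added here for completeness):

    # Expose the SPDK RPC UNIX socket on 127.0.0.1:9998 via socat.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 retries the connection and -t 2 caps each attempt at 2 seconds,
    # so no explicit wait for socat to come up is needed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"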
00:04:05.568 [2024-12-04 14:05:06.770578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56653 ] 00:04:05.568 [2024-12-04 14:05:06.923650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:05.827 [2024-12-04 14:05:07.070823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:05.827 [2024-12-04 14:05:07.071310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.827 [2024-12-04 14:05:07.071415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.395 14:05:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.395 14:05:07 -- common/autotest_common.sh@862 -- # return 0 00:04:06.395 14:05:07 -- spdkcli/tcp.sh@31 -- # socat_pid=56670 00:04:06.395 14:05:07 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:06.395 14:05:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:06.395 [ 00:04:06.395 "bdev_malloc_delete", 00:04:06.395 "bdev_malloc_create", 00:04:06.395 "bdev_null_resize", 00:04:06.395 "bdev_null_delete", 00:04:06.395 "bdev_null_create", 00:04:06.395 "bdev_nvme_cuse_unregister", 00:04:06.395 "bdev_nvme_cuse_register", 00:04:06.395 "bdev_opal_new_user", 00:04:06.395 "bdev_opal_set_lock_state", 00:04:06.395 "bdev_opal_delete", 00:04:06.395 "bdev_opal_get_info", 00:04:06.395 "bdev_opal_create", 00:04:06.395 "bdev_nvme_opal_revert", 00:04:06.395 "bdev_nvme_opal_init", 00:04:06.395 "bdev_nvme_send_cmd", 00:04:06.395 "bdev_nvme_get_path_iostat", 00:04:06.395 "bdev_nvme_get_mdns_discovery_info", 00:04:06.395 "bdev_nvme_stop_mdns_discovery", 00:04:06.395 "bdev_nvme_start_mdns_discovery", 00:04:06.395 "bdev_nvme_set_multipath_policy", 00:04:06.395 "bdev_nvme_set_preferred_path", 00:04:06.395 "bdev_nvme_get_io_paths", 00:04:06.395 "bdev_nvme_remove_error_injection", 00:04:06.395 "bdev_nvme_add_error_injection", 00:04:06.395 "bdev_nvme_get_discovery_info", 00:04:06.395 "bdev_nvme_stop_discovery", 00:04:06.395 "bdev_nvme_start_discovery", 00:04:06.395 "bdev_nvme_get_controller_health_info", 00:04:06.395 "bdev_nvme_disable_controller", 00:04:06.395 "bdev_nvme_enable_controller", 00:04:06.395 "bdev_nvme_reset_controller", 00:04:06.395 "bdev_nvme_get_transport_statistics", 00:04:06.395 "bdev_nvme_apply_firmware", 00:04:06.395 "bdev_nvme_detach_controller", 00:04:06.395 "bdev_nvme_get_controllers", 00:04:06.395 "bdev_nvme_attach_controller", 00:04:06.395 "bdev_nvme_set_hotplug", 00:04:06.395 "bdev_nvme_set_options", 00:04:06.395 "bdev_passthru_delete", 00:04:06.395 "bdev_passthru_create", 00:04:06.395 "bdev_lvol_grow_lvstore", 00:04:06.395 "bdev_lvol_get_lvols", 00:04:06.395 "bdev_lvol_get_lvstores", 00:04:06.395 "bdev_lvol_delete", 00:04:06.395 "bdev_lvol_set_read_only", 00:04:06.395 "bdev_lvol_resize", 00:04:06.395 "bdev_lvol_decouple_parent", 00:04:06.395 "bdev_lvol_inflate", 00:04:06.395 "bdev_lvol_rename", 00:04:06.395 "bdev_lvol_clone_bdev", 00:04:06.395 "bdev_lvol_clone", 00:04:06.395 "bdev_lvol_snapshot", 00:04:06.395 "bdev_lvol_create", 00:04:06.395 "bdev_lvol_delete_lvstore", 00:04:06.395 "bdev_lvol_rename_lvstore", 00:04:06.395 "bdev_lvol_create_lvstore", 00:04:06.395 "bdev_raid_set_options", 00:04:06.395 "bdev_raid_remove_base_bdev", 00:04:06.395 "bdev_raid_add_base_bdev", 
00:04:06.395 "bdev_raid_delete", 00:04:06.395 "bdev_raid_create", 00:04:06.395 "bdev_raid_get_bdevs", 00:04:06.396 "bdev_error_inject_error", 00:04:06.396 "bdev_error_delete", 00:04:06.396 "bdev_error_create", 00:04:06.396 "bdev_split_delete", 00:04:06.396 "bdev_split_create", 00:04:06.396 "bdev_delay_delete", 00:04:06.396 "bdev_delay_create", 00:04:06.396 "bdev_delay_update_latency", 00:04:06.396 "bdev_zone_block_delete", 00:04:06.396 "bdev_zone_block_create", 00:04:06.396 "blobfs_create", 00:04:06.396 "blobfs_detect", 00:04:06.396 "blobfs_set_cache_size", 00:04:06.396 "bdev_xnvme_delete", 00:04:06.396 "bdev_xnvme_create", 00:04:06.396 "bdev_aio_delete", 00:04:06.396 "bdev_aio_rescan", 00:04:06.396 "bdev_aio_create", 00:04:06.396 "bdev_ftl_set_property", 00:04:06.396 "bdev_ftl_get_properties", 00:04:06.396 "bdev_ftl_get_stats", 00:04:06.396 "bdev_ftl_unmap", 00:04:06.396 "bdev_ftl_unload", 00:04:06.396 "bdev_ftl_delete", 00:04:06.396 "bdev_ftl_load", 00:04:06.396 "bdev_ftl_create", 00:04:06.396 "bdev_virtio_attach_controller", 00:04:06.396 "bdev_virtio_scsi_get_devices", 00:04:06.396 "bdev_virtio_detach_controller", 00:04:06.396 "bdev_virtio_blk_set_hotplug", 00:04:06.396 "bdev_iscsi_delete", 00:04:06.396 "bdev_iscsi_create", 00:04:06.396 "bdev_iscsi_set_options", 00:04:06.396 "accel_error_inject_error", 00:04:06.396 "ioat_scan_accel_module", 00:04:06.396 "dsa_scan_accel_module", 00:04:06.396 "iaa_scan_accel_module", 00:04:06.396 "iscsi_set_options", 00:04:06.396 "iscsi_get_auth_groups", 00:04:06.396 "iscsi_auth_group_remove_secret", 00:04:06.396 "iscsi_auth_group_add_secret", 00:04:06.396 "iscsi_delete_auth_group", 00:04:06.396 "iscsi_create_auth_group", 00:04:06.396 "iscsi_set_discovery_auth", 00:04:06.396 "iscsi_get_options", 00:04:06.396 "iscsi_target_node_request_logout", 00:04:06.396 "iscsi_target_node_set_redirect", 00:04:06.396 "iscsi_target_node_set_auth", 00:04:06.396 "iscsi_target_node_add_lun", 00:04:06.396 "iscsi_get_connections", 00:04:06.396 "iscsi_portal_group_set_auth", 00:04:06.396 "iscsi_start_portal_group", 00:04:06.396 "iscsi_delete_portal_group", 00:04:06.396 "iscsi_create_portal_group", 00:04:06.396 "iscsi_get_portal_groups", 00:04:06.396 "iscsi_delete_target_node", 00:04:06.396 "iscsi_target_node_remove_pg_ig_maps", 00:04:06.396 "iscsi_target_node_add_pg_ig_maps", 00:04:06.396 "iscsi_create_target_node", 00:04:06.396 "iscsi_get_target_nodes", 00:04:06.396 "iscsi_delete_initiator_group", 00:04:06.396 "iscsi_initiator_group_remove_initiators", 00:04:06.396 "iscsi_initiator_group_add_initiators", 00:04:06.396 "iscsi_create_initiator_group", 00:04:06.396 "iscsi_get_initiator_groups", 00:04:06.396 "nvmf_set_crdt", 00:04:06.396 "nvmf_set_config", 00:04:06.396 "nvmf_set_max_subsystems", 00:04:06.396 "nvmf_subsystem_get_listeners", 00:04:06.396 "nvmf_subsystem_get_qpairs", 00:04:06.396 "nvmf_subsystem_get_controllers", 00:04:06.396 "nvmf_get_stats", 00:04:06.396 "nvmf_get_transports", 00:04:06.396 "nvmf_create_transport", 00:04:06.396 "nvmf_get_targets", 00:04:06.396 "nvmf_delete_target", 00:04:06.396 "nvmf_create_target", 00:04:06.396 "nvmf_subsystem_allow_any_host", 00:04:06.396 "nvmf_subsystem_remove_host", 00:04:06.396 "nvmf_subsystem_add_host", 00:04:06.396 "nvmf_subsystem_remove_ns", 00:04:06.396 "nvmf_subsystem_add_ns", 00:04:06.396 "nvmf_subsystem_listener_set_ana_state", 00:04:06.396 "nvmf_discovery_get_referrals", 00:04:06.396 "nvmf_discovery_remove_referral", 00:04:06.396 "nvmf_discovery_add_referral", 00:04:06.396 "nvmf_subsystem_remove_listener", 00:04:06.396 
"nvmf_subsystem_add_listener", 00:04:06.396 "nvmf_delete_subsystem", 00:04:06.396 "nvmf_create_subsystem", 00:04:06.396 "nvmf_get_subsystems", 00:04:06.396 "env_dpdk_get_mem_stats", 00:04:06.396 "nbd_get_disks", 00:04:06.396 "nbd_stop_disk", 00:04:06.396 "nbd_start_disk", 00:04:06.396 "ublk_recover_disk", 00:04:06.396 "ublk_get_disks", 00:04:06.396 "ublk_stop_disk", 00:04:06.396 "ublk_start_disk", 00:04:06.396 "ublk_destroy_target", 00:04:06.396 "ublk_create_target", 00:04:06.396 "virtio_blk_create_transport", 00:04:06.396 "virtio_blk_get_transports", 00:04:06.396 "vhost_controller_set_coalescing", 00:04:06.396 "vhost_get_controllers", 00:04:06.396 "vhost_delete_controller", 00:04:06.396 "vhost_create_blk_controller", 00:04:06.396 "vhost_scsi_controller_remove_target", 00:04:06.396 "vhost_scsi_controller_add_target", 00:04:06.396 "vhost_start_scsi_controller", 00:04:06.396 "vhost_create_scsi_controller", 00:04:06.396 "thread_set_cpumask", 00:04:06.396 "framework_get_scheduler", 00:04:06.396 "framework_set_scheduler", 00:04:06.396 "framework_get_reactors", 00:04:06.396 "thread_get_io_channels", 00:04:06.396 "thread_get_pollers", 00:04:06.396 "thread_get_stats", 00:04:06.396 "framework_monitor_context_switch", 00:04:06.396 "spdk_kill_instance", 00:04:06.396 "log_enable_timestamps", 00:04:06.396 "log_get_flags", 00:04:06.396 "log_clear_flag", 00:04:06.396 "log_set_flag", 00:04:06.396 "log_get_level", 00:04:06.396 "log_set_level", 00:04:06.396 "log_get_print_level", 00:04:06.396 "log_set_print_level", 00:04:06.396 "framework_enable_cpumask_locks", 00:04:06.396 "framework_disable_cpumask_locks", 00:04:06.396 "framework_wait_init", 00:04:06.396 "framework_start_init", 00:04:06.396 "scsi_get_devices", 00:04:06.396 "bdev_get_histogram", 00:04:06.396 "bdev_enable_histogram", 00:04:06.396 "bdev_set_qos_limit", 00:04:06.396 "bdev_set_qd_sampling_period", 00:04:06.396 "bdev_get_bdevs", 00:04:06.396 "bdev_reset_iostat", 00:04:06.396 "bdev_get_iostat", 00:04:06.396 "bdev_examine", 00:04:06.396 "bdev_wait_for_examine", 00:04:06.396 "bdev_set_options", 00:04:06.396 "notify_get_notifications", 00:04:06.396 "notify_get_types", 00:04:06.396 "accel_get_stats", 00:04:06.396 "accel_set_options", 00:04:06.396 "accel_set_driver", 00:04:06.396 "accel_crypto_key_destroy", 00:04:06.396 "accel_crypto_keys_get", 00:04:06.396 "accel_crypto_key_create", 00:04:06.396 "accel_assign_opc", 00:04:06.396 "accel_get_module_info", 00:04:06.396 "accel_get_opc_assignments", 00:04:06.396 "vmd_rescan", 00:04:06.396 "vmd_remove_device", 00:04:06.396 "vmd_enable", 00:04:06.396 "sock_set_default_impl", 00:04:06.396 "sock_impl_set_options", 00:04:06.396 "sock_impl_get_options", 00:04:06.396 "iobuf_get_stats", 00:04:06.396 "iobuf_set_options", 00:04:06.396 "framework_get_pci_devices", 00:04:06.397 "framework_get_config", 00:04:06.397 "framework_get_subsystems", 00:04:06.397 "trace_get_info", 00:04:06.397 "trace_get_tpoint_group_mask", 00:04:06.397 "trace_disable_tpoint_group", 00:04:06.397 "trace_enable_tpoint_group", 00:04:06.397 "trace_clear_tpoint_mask", 00:04:06.397 "trace_set_tpoint_mask", 00:04:06.397 "spdk_get_version", 00:04:06.397 "rpc_get_methods" 00:04:06.397 ] 00:04:06.397 14:05:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:06.397 14:05:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:06.397 14:05:07 -- common/autotest_common.sh@10 -- # set +x 00:04:06.397 14:05:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:06.397 14:05:07 -- spdkcli/tcp.sh@38 -- # killprocess 56653 00:04:06.397 
14:05:07 -- common/autotest_common.sh@936 -- # '[' -z 56653 ']' 00:04:06.397 14:05:07 -- common/autotest_common.sh@940 -- # kill -0 56653 00:04:06.397 14:05:07 -- common/autotest_common.sh@941 -- # uname 00:04:06.397 14:05:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:06.397 14:05:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56653 00:04:06.397 killing process with pid 56653 00:04:06.397 14:05:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:06.397 14:05:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:06.397 14:05:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56653' 00:04:06.397 14:05:07 -- common/autotest_common.sh@955 -- # kill 56653 00:04:06.397 14:05:07 -- common/autotest_common.sh@960 -- # wait 56653 00:04:07.798 ************************************ 00:04:07.798 END TEST spdkcli_tcp 00:04:07.798 ************************************ 00:04:07.798 00:04:07.798 real 0m2.452s 00:04:07.798 user 0m4.217s 00:04:07.798 sys 0m0.417s 00:04:07.798 14:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.798 14:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:07.798 14:05:09 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.798 14:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.798 14:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:07.798 ************************************ 00:04:07.798 START TEST dpdk_mem_utility 00:04:07.798 ************************************ 00:04:07.798 14:05:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:07.798 * Looking for test storage... 00:04:07.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:07.798 14:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.798 14:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.798 14:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.798 14:05:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.798 14:05:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.798 14:05:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.798 14:05:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.798 14:05:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.798 14:05:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.798 14:05:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.798 14:05:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.798 14:05:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.798 14:05:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.798 14:05:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.798 14:05:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.798 14:05:09 -- scripts/common.sh@344 -- # : 1 00:04:07.798 14:05:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.798 14:05:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.798 14:05:09 -- scripts/common.sh@364 -- # decimal 1 00:04:07.798 14:05:09 -- scripts/common.sh@352 -- # local d=1 00:04:07.798 14:05:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.798 14:05:09 -- scripts/common.sh@354 -- # echo 1 00:04:07.798 14:05:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.798 14:05:09 -- scripts/common.sh@365 -- # decimal 2 00:04:07.798 14:05:09 -- scripts/common.sh@352 -- # local d=2 00:04:07.798 14:05:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.798 14:05:09 -- scripts/common.sh@354 -- # echo 2 00:04:07.798 14:05:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.798 14:05:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.798 14:05:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.798 14:05:09 -- scripts/common.sh@367 -- # return 0 00:04:07.798 14:05:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.798 --rc genhtml_branch_coverage=1 00:04:07.798 --rc genhtml_function_coverage=1 00:04:07.798 --rc genhtml_legend=1 00:04:07.798 --rc geninfo_all_blocks=1 00:04:07.798 --rc geninfo_unexecuted_blocks=1 00:04:07.798 00:04:07.798 ' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.798 --rc genhtml_branch_coverage=1 00:04:07.798 --rc genhtml_function_coverage=1 00:04:07.798 --rc genhtml_legend=1 00:04:07.798 --rc geninfo_all_blocks=1 00:04:07.798 --rc geninfo_unexecuted_blocks=1 00:04:07.798 00:04:07.798 ' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.798 --rc genhtml_branch_coverage=1 00:04:07.798 --rc genhtml_function_coverage=1 00:04:07.798 --rc genhtml_legend=1 00:04:07.798 --rc geninfo_all_blocks=1 00:04:07.798 --rc geninfo_unexecuted_blocks=1 00:04:07.798 00:04:07.798 ' 00:04:07.798 14:05:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.798 --rc genhtml_branch_coverage=1 00:04:07.798 --rc genhtml_function_coverage=1 00:04:07.798 --rc genhtml_legend=1 00:04:07.798 --rc geninfo_all_blocks=1 00:04:07.798 --rc geninfo_unexecuted_blocks=1 00:04:07.798 00:04:07.798 ' 00:04:07.798 14:05:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:07.798 14:05:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56752 00:04:07.798 14:05:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56752 00:04:07.798 14:05:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:07.798 14:05:09 -- common/autotest_common.sh@829 -- # '[' -z 56752 ']' 00:04:07.798 14:05:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.798 14:05:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.798 14:05:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
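The dpdk_mem_utility test that unfolds below is a three-step pipeline: ask the live target to dump its DPDK memory state, then post-process the dump with scripts/dpdk_mem_info.py, once for the overall summary and once with -m 0 (assumed here to scope the report to heap/memzone id 0). Sketched with direct rpc.py calls in place of the rpc_cmd wrapper the test uses:

    # 1. RPC dumps the raw state; the trace below shows it lands in
    #    /tmp/spdk_mem_dump.txt.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # 2. Summarize heaps, mempools and memzones from the dump.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # 3. Narrow the report with -m 0, as step @23 of the test does.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0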
00:04:07.798 14:05:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.798 14:05:09 -- common/autotest_common.sh@10 -- # set +x 00:04:07.798 [2024-12-04 14:05:09.256485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:07.798 [2024-12-04 14:05:09.256574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56752 ] 00:04:08.057 [2024-12-04 14:05:09.396655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.316 [2024-12-04 14:05:09.534865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:08.316 [2024-12-04 14:05:09.535013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.885 14:05:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.885 14:05:10 -- common/autotest_common.sh@862 -- # return 0 00:04:08.885 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:08.885 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:08.885 14:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.885 14:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:08.885 { 00:04:08.885 "filename": "/tmp/spdk_mem_dump.txt" 00:04:08.885 } 00:04:08.885 14:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.885 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:08.885 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:08.885 1 heaps totaling size 820.000000 MiB 00:04:08.885 size: 820.000000 MiB heap id: 0 00:04:08.885 end heaps---------- 00:04:08.885 8 mempools totaling size 598.116089 MiB 00:04:08.885 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:08.885 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:08.885 size: 84.521057 MiB name: bdev_io_56752 00:04:08.885 size: 51.011292 MiB name: evtpool_56752 00:04:08.885 size: 50.003479 MiB name: msgpool_56752 00:04:08.885 size: 21.763794 MiB name: PDU_Pool 00:04:08.885 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:08.885 size: 0.026123 MiB name: Session_Pool 00:04:08.885 end mempools------- 00:04:08.885 6 memzones totaling size 4.142822 MiB 00:04:08.885 size: 1.000366 MiB name: RG_ring_0_56752 00:04:08.885 size: 1.000366 MiB name: RG_ring_1_56752 00:04:08.885 size: 1.000366 MiB name: RG_ring_4_56752 00:04:08.885 size: 1.000366 MiB name: RG_ring_5_56752 00:04:08.885 size: 0.125366 MiB name: RG_ring_2_56752 00:04:08.885 size: 0.015991 MiB name: RG_ring_3_56752 00:04:08.885 end memzones------- 00:04:08.885 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:08.885 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:04:08.885 list of free elements. 
size: 18.451538 MiB 00:04:08.885 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:08.885 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:08.885 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:08.885 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:08.885 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:08.885 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:08.885 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:08.885 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:08.885 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:08.885 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:08.885 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:08.885 element at address: 0x200000200000 with size: 0.829224 MiB 00:04:08.885 element at address: 0x20001b000000 with size: 0.564880 MiB 00:04:08.885 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:08.885 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:08.885 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:08.885 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:08.885 element at address: 0x200003a00000 with size: 0.352234 MiB 00:04:08.885 list of standard malloc elements. size: 199.284058 MiB 00:04:08.885 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:08.885 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:08.885 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:08.885 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:08.885 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:08.885 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:08.885 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:08.885 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:08.885 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:08.885 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:08.885 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:08.885 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:04:08.885 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:08.885 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:08.886 element at 
address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b092cc0 with size: 0.000244 MiB 
00:04:08.886 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:08.886 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:08.887 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:08.887 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:08.887 element at 
address: 0x20002846b680 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e780 
with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:08.887 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:08.887 list of memzone associated elements. size: 602.264404 MiB 00:04:08.887 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:08.887 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:08.887 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:08.887 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:08.887 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:08.887 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56752_0 00:04:08.887 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:08.887 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56752_0 00:04:08.887 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:08.887 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56752_0 00:04:08.887 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:08.887 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:08.887 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:08.887 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:08.887 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:08.887 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56752 00:04:08.887 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:08.887 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56752 00:04:08.887 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:08.887 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56752 00:04:08.887 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:08.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:08.887 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:08.887 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:08.887 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:08.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:08.887 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:08.887 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:08.887 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:08.887 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56752 00:04:08.887 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:08.887 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56752 00:04:08.887 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:08.887 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56752 00:04:08.887 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:08.887 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56752 00:04:08.887 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:08.887 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56752 00:04:08.888 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:08.888 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:08.888 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:08.888 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:08.888 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:08.888 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:08.888 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:08.888 associated memzone info: size: 0.125366 MiB name: RG_ring_2_56752 00:04:08.888 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:08.888 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:08.888 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:08.888 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:08.888 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:08.888 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56752 00:04:08.888 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:08.888 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:08.888 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:08.888 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56752 00:04:08.888 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:08.888 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56752 00:04:08.888 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:08.888 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:08.888 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:08.888 14:05:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56752 00:04:08.888 14:05:10 -- common/autotest_common.sh@936 -- # '[' -z 56752 ']' 00:04:08.888 14:05:10 -- common/autotest_common.sh@940 -- # kill -0 56752 00:04:08.888 14:05:10 -- common/autotest_common.sh@941 -- # uname 00:04:08.888 14:05:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:08.888 14:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56752 00:04:08.888 14:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:08.888 killing process with pid 56752 
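The listing that precedes is the whole round trip this test exercises: the `env_dpdk_get_mem_stats` RPC asks the running spdk_tgt to write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary (one 820 MiB heap, 8 mempools totaling ~598 MiB, and the per-PID rings for pid 56752), then, with `-m 0`, as the element-by-element view of heap id 0. Replayed by hand, and assuming the stock rpc.py client behind the rpc_cmd wrapper traced above, the sequence is roughly:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &                      # target under inspection
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                  # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0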
00:04:08.888 14:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:08.888 14:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56752' 00:04:08.888 14:05:10 -- common/autotest_common.sh@955 -- # kill 56752 00:04:08.888 14:05:10 -- common/autotest_common.sh@960 -- # wait 56752 00:04:10.264 ************************************ 00:04:10.264 END TEST dpdk_mem_utility 00:04:10.264 ************************************ 00:04:10.264 00:04:10.264 real 0m2.305s 00:04:10.264 user 0m2.327s 00:04:10.264 sys 0m0.346s 00:04:10.264 14:05:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:10.264 14:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.264 14:05:11 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:10.264 14:05:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.264 14:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.264 ************************************ 00:04:10.264 START TEST event 00:04:10.264 ************************************ 00:04:10.264 14:05:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:10.264 * Looking for test storage... 00:04:10.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:10.264 14:05:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.264 14:05:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.264 14:05:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.264 14:05:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.264 14:05:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.264 14:05:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.264 14:05:11 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.264 14:05:11 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.264 14:05:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.264 14:05:11 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.264 14:05:11 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.264 14:05:11 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.264 14:05:11 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.264 14:05:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.264 14:05:11 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.264 14:05:11 -- scripts/common.sh@344 -- # : 1 00:04:10.264 14:05:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.264 14:05:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.264 14:05:11 -- scripts/common.sh@364 -- # decimal 1 00:04:10.264 14:05:11 -- scripts/common.sh@352 -- # local d=1 00:04:10.264 14:05:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.264 14:05:11 -- scripts/common.sh@354 -- # echo 1 00:04:10.264 14:05:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.264 14:05:11 -- scripts/common.sh@365 -- # decimal 2 00:04:10.264 14:05:11 -- scripts/common.sh@352 -- # local d=2 00:04:10.264 14:05:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.264 14:05:11 -- scripts/common.sh@354 -- # echo 2 00:04:10.264 14:05:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.264 14:05:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.264 14:05:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.264 14:05:11 -- scripts/common.sh@367 -- # return 0 00:04:10.264 14:05:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.264 --rc genhtml_branch_coverage=1 00:04:10.264 --rc genhtml_function_coverage=1 00:04:10.264 --rc genhtml_legend=1 00:04:10.264 --rc geninfo_all_blocks=1 00:04:10.264 --rc geninfo_unexecuted_blocks=1 00:04:10.264 00:04:10.264 ' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.264 --rc genhtml_branch_coverage=1 00:04:10.264 --rc genhtml_function_coverage=1 00:04:10.264 --rc genhtml_legend=1 00:04:10.264 --rc geninfo_all_blocks=1 00:04:10.264 --rc geninfo_unexecuted_blocks=1 00:04:10.264 00:04:10.264 ' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.264 --rc genhtml_branch_coverage=1 00:04:10.264 --rc genhtml_function_coverage=1 00:04:10.264 --rc genhtml_legend=1 00:04:10.264 --rc geninfo_all_blocks=1 00:04:10.264 --rc geninfo_unexecuted_blocks=1 00:04:10.264 00:04:10.264 ' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.264 --rc genhtml_branch_coverage=1 00:04:10.264 --rc genhtml_function_coverage=1 00:04:10.264 --rc genhtml_legend=1 00:04:10.264 --rc geninfo_all_blocks=1 00:04:10.264 --rc geninfo_unexecuted_blocks=1 00:04:10.264 00:04:10.264 ' 00:04:10.264 14:05:11 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:10.264 14:05:11 -- bdev/nbd_common.sh@6 -- # set -e 00:04:10.264 14:05:11 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.264 14:05:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:10.264 14:05:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.264 14:05:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.264 ************************************ 00:04:10.264 START TEST event_perf 00:04:10.264 ************************************ 00:04:10.264 14:05:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.264 Running I/O for 1 seconds...[2024-12-04 14:05:11.587850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:10.264 [2024-12-04 14:05:11.587963] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56848 ] 00:04:10.523 [2024-12-04 14:05:11.738788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.523 [2024-12-04 14:05:11.891252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.523 [2024-12-04 14:05:11.891468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.523 [2024-12-04 14:05:11.892001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.523 Running I/O for 1 seconds...[2024-12-04 14:05:11.892016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.895 00:04:11.895 lcore 0: 200686 00:04:11.895 lcore 1: 200689 00:04:11.895 lcore 2: 200692 00:04:11.895 lcore 3: 200692 00:04:11.895 done. 00:04:11.895 00:04:11.895 real 0m1.541s 00:04:11.895 user 0m4.339s 00:04:11.895 sys 0m0.086s 00:04:11.895 ************************************ 00:04:11.895 END TEST event_perf 00:04:11.896 ************************************ 00:04:11.896 14:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.896 14:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.896 14:05:13 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:11.896 14:05:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:11.896 14:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.896 14:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:11.896 ************************************ 00:04:11.896 START TEST event_reactor 00:04:11.896 ************************************ 00:04:11.896 14:05:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:11.896 [2024-12-04 14:05:13.177785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
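event_perf above is the event-framework throughput smoke test: `-m 0xF` spreads reactors across four lcores, `-t 1` runs for one second, and the per-lcore lines report how many events each reactor processed, a little over 200k apiece here. The invocation is exactly what the trace shows:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1   # -m: core mask, -t: seconds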
00:04:11.896 [2024-12-04 14:05:13.177882] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56882 ] 00:04:11.896 [2024-12-04 14:05:13.326428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.152 [2024-12-04 14:05:13.472383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.522 test_start 00:04:13.522 oneshot 00:04:13.522 tick 100 00:04:13.522 tick 100 00:04:13.522 tick 250 00:04:13.522 tick 100 00:04:13.523 tick 100 00:04:13.523 tick 100 00:04:13.523 tick 250 00:04:13.523 tick 500 00:04:13.523 tick 100 00:04:13.523 tick 100 00:04:13.523 tick 250 00:04:13.523 tick 100 00:04:13.523 tick 100 00:04:13.523 test_end 00:04:13.523 00:04:13.523 real 0m1.519s 00:04:13.523 user 0m1.340s 00:04:13.523 sys 0m0.071s 00:04:13.523 14:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.523 ************************************ 00:04:13.523 END TEST event_reactor 00:04:13.523 ************************************ 00:04:13.523 14:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.523 14:05:14 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.523 14:05:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:13.523 14:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.523 14:05:14 -- common/autotest_common.sh@10 -- # set +x 00:04:13.523 ************************************ 00:04:13.523 START TEST event_reactor_perf 00:04:13.523 ************************************ 00:04:13.523 14:05:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.523 [2024-12-04 14:05:14.746300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
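The reactor test above runs a single reactor for one second and logs each timer expiry: `oneshot` fires once, and the `tick 100`/`tick 250`/`tick 500` lines come from periodic timers whose output grows sparser as the period grows (the units are internal to the test, so read the numbers as relative periods). Its perf companion, run next in this log, drives a one-core reactor as fast as it can and reports roughly 410k events per second. Both take only a duration:

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1                # timer/poller exercise
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1      # raw event throughput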
00:04:13.523 [2024-12-04 14:05:14.746401] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56924 ] 00:04:13.523 [2024-12-04 14:05:14.894207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.791 [2024-12-04 14:05:15.035279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.165 test_start 00:04:15.165 test_end 00:04:15.165 Performance: 410105 events per second 00:04:15.165 00:04:15.165 real 0m1.524s 00:04:15.165 user 0m1.334s 00:04:15.165 sys 0m0.081s 00:04:15.165 14:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.165 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.165 ************************************ 00:04:15.165 END TEST event_reactor_perf 00:04:15.165 ************************************ 00:04:15.165 14:05:16 -- event/event.sh@49 -- # uname -s 00:04:15.165 14:05:16 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:15.165 14:05:16 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:15.165 14:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.165 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.165 ************************************ 00:04:15.165 START TEST event_scheduler 00:04:15.165 ************************************ 00:04:15.165 14:05:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:15.165 * Looking for test storage... 00:04:15.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:15.165 14:05:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:15.165 14:05:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:15.165 14:05:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:15.165 14:05:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:15.165 14:05:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:15.165 14:05:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:15.165 14:05:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:15.165 14:05:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:15.165 14:05:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.165 14:05:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:15.165 14:05:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:15.165 14:05:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:15.165 14:05:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:15.165 14:05:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:15.165 14:05:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:15.165 14:05:16 -- scripts/common.sh@344 -- # : 1 00:04:15.165 14:05:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:15.165 14:05:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.165 14:05:16 -- scripts/common.sh@364 -- # decimal 1 00:04:15.165 14:05:16 -- scripts/common.sh@352 -- # local d=1 00:04:15.165 14:05:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.165 14:05:16 -- scripts/common.sh@354 -- # echo 1 00:04:15.165 14:05:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:15.165 14:05:16 -- scripts/common.sh@365 -- # decimal 2 00:04:15.165 14:05:16 -- scripts/common.sh@352 -- # local d=2 00:04:15.165 14:05:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.165 14:05:16 -- scripts/common.sh@354 -- # echo 2 00:04:15.165 14:05:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:15.165 14:05:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:15.165 14:05:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:15.165 14:05:16 -- scripts/common.sh@367 -- # return 0 00:04:15.165 14:05:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.165 --rc genhtml_branch_coverage=1 00:04:15.165 --rc genhtml_function_coverage=1 00:04:15.165 --rc genhtml_legend=1 00:04:15.165 --rc geninfo_all_blocks=1 00:04:15.165 --rc geninfo_unexecuted_blocks=1 00:04:15.165 00:04:15.165 ' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.165 --rc genhtml_branch_coverage=1 00:04:15.165 --rc genhtml_function_coverage=1 00:04:15.165 --rc genhtml_legend=1 00:04:15.165 --rc geninfo_all_blocks=1 00:04:15.165 --rc geninfo_unexecuted_blocks=1 00:04:15.165 00:04:15.165 ' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.165 --rc genhtml_branch_coverage=1 00:04:15.165 --rc genhtml_function_coverage=1 00:04:15.165 --rc genhtml_legend=1 00:04:15.165 --rc geninfo_all_blocks=1 00:04:15.165 --rc geninfo_unexecuted_blocks=1 00:04:15.165 00:04:15.165 ' 00:04:15.165 14:05:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.165 --rc genhtml_branch_coverage=1 00:04:15.165 --rc genhtml_function_coverage=1 00:04:15.165 --rc genhtml_legend=1 00:04:15.165 --rc geninfo_all_blocks=1 00:04:15.165 --rc geninfo_unexecuted_blocks=1 00:04:15.165 00:04:15.165 ' 00:04:15.165 14:05:16 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.165 14:05:16 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56988 00:04:15.165 14:05:16 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.165 14:05:16 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.165 14:05:16 -- scheduler/scheduler.sh@37 -- # waitforlisten 56988 00:04:15.165 14:05:16 -- common/autotest_common.sh@829 -- # '[' -z 56988 ']' 00:04:15.165 14:05:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.165 14:05:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.165 14:05:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
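The scheduler app above is launched on four cores (`-m 0xF`) with lcore 2 as the main core (`-p 0x2`, matching the `--main-lcore=2` EAL parameter below) and with `--wait-for-rpc`, so the framework pauses before init and the test can install the dynamic scheduler first; the rpc_cmd calls that follow do exactly that. The POWER: failures after the scheduler switch are the dynamic governor probing cpufreq drivers that do not exist inside this VM, after which the test proceeds with the governor disabled. As a sketch, assuming the stock rpc.py client behind rpc_cmd:

    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic   # before framework init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init              # now start the framework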
00:04:15.165 14:05:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.165 14:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.165 [2024-12-04 14:05:16.497834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:15.165 [2024-12-04 14:05:16.497948] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56988 ] 00:04:15.424 [2024-12-04 14:05:16.647324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.424 [2024-12-04 14:05:16.848299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.424 [2024-12-04 14:05:16.848686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.424 [2024-12-04 14:05:16.848892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.424 [2024-12-04 14:05:16.849041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.992 14:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.993 14:05:17 -- common/autotest_common.sh@862 -- # return 0 00:04:15.993 14:05:17 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.993 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.993 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:15.993 POWER: Env isn't set yet! 00:04:15.993 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:15.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.993 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.993 POWER: Attempting to initialise PSTAT power management... 00:04:15.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.993 POWER: Cannot set governor of lcore 0 to performance 00:04:15.993 POWER: Attempting to initialise AMD PSTATE power management... 00:04:15.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.993 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.993 POWER: Attempting to initialise CPPC power management... 00:04:15.993 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:15.993 POWER: Cannot set governor of lcore 0 to userspace 00:04:15.993 POWER: Attempting to initialise VM power management... 
00:04:15.993 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:15.993 POWER: Unable to set Power Management Environment for lcore 0 00:04:15.993 [2024-12-04 14:05:17.274854] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:15.993 [2024-12-04 14:05:17.274880] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:15.993 [2024-12-04 14:05:17.274935] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.993 [2024-12-04 14:05:17.274990] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.993 [2024-12-04 14:05:17.275020] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.993 [2024-12-04 14:05:17.275072] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:15.993 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.993 14:05:17 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.993 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.993 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.255 [2024-12-04 14:05:17.511871] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:16.255 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.255 14:05:17 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:16.255 14:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.255 14:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.255 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.255 ************************************ 00:04:16.255 START TEST scheduler_create_thread 00:04:16.255 ************************************ 00:04:16.255 14:05:17 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:16.255 14:05:17 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:16.255 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.255 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.255 2 00:04:16.255 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 3 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 4 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 5 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 6 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 7 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 8 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 9 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 10 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:16.256 14:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.256 14:05:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.256 14:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.256 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 14:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.643 00:04:17.643 real 0m1.171s 00:04:17.643 user 0m0.012s 00:04:17.643 sys 0m0.006s 00:04:17.643 14:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.643 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 
************************************ 00:04:17.643 END TEST scheduler_create_thread 00:04:17.643 ************************************ 00:04:17.643 14:05:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:17.643 14:05:18 -- scheduler/scheduler.sh@46 -- # killprocess 56988 00:04:17.643 14:05:18 -- common/autotest_common.sh@936 -- # '[' -z 56988 ']' 00:04:17.643 14:05:18 -- common/autotest_common.sh@940 -- # kill -0 56988 00:04:17.643 14:05:18 -- common/autotest_common.sh@941 -- # uname 00:04:17.643 14:05:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:17.643 14:05:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56988 00:04:17.643 14:05:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:17.643 14:05:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:17.643 killing process with pid 56988 00:04:17.643 14:05:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56988' 00:04:17.643 14:05:18 -- common/autotest_common.sh@955 -- # kill 56988 00:04:17.643 14:05:18 -- common/autotest_common.sh@960 -- # wait 56988 00:04:17.902 [2024-12-04 14:05:19.177576] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:18.469 00:04:18.469 real 0m3.498s 00:04:18.469 user 0m5.130s 00:04:18.469 sys 0m0.362s 00:04:18.469 ************************************ 00:04:18.469 END TEST event_scheduler 00:04:18.469 ************************************ 00:04:18.469 14:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.469 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:18.469 14:05:19 -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.469 14:05:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.469 14:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.469 14:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.469 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:18.469 ************************************ 00:04:18.469 START TEST app_repeat 00:04:18.469 ************************************ 00:04:18.469 14:05:19 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:18.469 14:05:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.469 14:05:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.469 14:05:19 -- event/event.sh@13 -- # local nbd_list 00:04:18.469 14:05:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.469 14:05:19 -- event/event.sh@14 -- # local bdev_list 00:04:18.469 14:05:19 -- event/event.sh@15 -- # local repeat_times=4 00:04:18.469 14:05:19 -- event/event.sh@17 -- # modprobe nbd 00:04:18.469 14:05:19 -- event/event.sh@19 -- # repeat_pid=57083 00:04:18.469 Process app_repeat pid: 57083 00:04:18.469 14:05:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.469 14:05:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57083' 00:04:18.469 14:05:19 -- event/event.sh@23 -- # for i in {0..2} 00:04:18.469 spdk_app_start Round 0 00:04:18.469 14:05:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.469 14:05:19 -- event/event.sh@25 -- # waitforlisten 57083 /var/tmp/spdk-nbd.sock 00:04:18.469 14:05:19 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.469 14:05:19 -- common/autotest_common.sh@829 -- # '[' -z 57083 ']' 00:04:18.469 14:05:19 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.469 14:05:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.469 14:05:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.469 14:05:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.469 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:04:18.469 [2024-12-04 14:05:19.890041] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:18.469 [2024-12-04 14:05:19.890159] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57083 ] 00:04:18.728 [2024-12-04 14:05:20.035682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.728 [2024-12-04 14:05:20.173360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.728 [2024-12-04 14:05:20.173436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.294 14:05:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.294 14:05:20 -- common/autotest_common.sh@862 -- # return 0 00:04:19.294 14:05:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.553 Malloc0 00:04:19.553 14:05:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.812 Malloc1 00:04:19.812 14:05:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@12 -- # local i 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.812 14:05:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.070 /dev/nbd0 00:04:20.070 14:05:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.070 14:05:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.070 14:05:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:20.070 14:05:21 -- common/autotest_common.sh@867 -- # local i 00:04:20.070 14:05:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.070 
14:05:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.070 14:05:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:20.070 14:05:21 -- common/autotest_common.sh@871 -- # break 00:04:20.070 14:05:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.070 14:05:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.070 14:05:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.070 1+0 records in 00:04:20.070 1+0 records out 00:04:20.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220904 s, 18.5 MB/s 00:04:20.070 14:05:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.070 14:05:21 -- common/autotest_common.sh@884 -- # size=4096 00:04:20.070 14:05:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.070 14:05:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.070 14:05:21 -- common/autotest_common.sh@887 -- # return 0 00:04:20.070 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.070 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.070 14:05:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.070 /dev/nbd1 00:04:20.329 14:05:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.329 14:05:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.329 14:05:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:20.329 14:05:21 -- common/autotest_common.sh@867 -- # local i 00:04:20.329 14:05:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:20.329 14:05:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:20.329 14:05:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:20.329 14:05:21 -- common/autotest_common.sh@871 -- # break 00:04:20.329 14:05:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:20.329 14:05:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:20.329 14:05:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.329 1+0 records in 00:04:20.329 1+0 records out 00:04:20.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025333 s, 16.2 MB/s 00:04:20.329 14:05:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.330 14:05:21 -- common/autotest_common.sh@884 -- # size=4096 00:04:20.330 14:05:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:20.330 14:05:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:20.330 14:05:21 -- common/autotest_common.sh@887 -- # return 0 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.330 { 00:04:20.330 "nbd_device": "/dev/nbd0", 00:04:20.330 "bdev_name": "Malloc0" 00:04:20.330 }, 00:04:20.330 { 00:04:20.330 "nbd_device": 
"/dev/nbd1", 00:04:20.330 "bdev_name": "Malloc1" 00:04:20.330 } 00:04:20.330 ]' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.330 { 00:04:20.330 "nbd_device": "/dev/nbd0", 00:04:20.330 "bdev_name": "Malloc0" 00:04:20.330 }, 00:04:20.330 { 00:04:20.330 "nbd_device": "/dev/nbd1", 00:04:20.330 "bdev_name": "Malloc1" 00:04:20.330 } 00:04:20.330 ]' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.330 /dev/nbd1' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.330 /dev/nbd1' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.330 256+0 records in 00:04:20.330 256+0 records out 00:04:20.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974738 s, 108 MB/s 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.330 14:05:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.589 256+0 records in 00:04:20.589 256+0 records out 00:04:20.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153969 s, 68.1 MB/s 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.589 256+0 records in 00:04:20.589 256+0 records out 00:04:20.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181792 s, 57.7 MB/s 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@51 -- # local i 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.589 14:05:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@41 -- # break 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.589 14:05:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@41 -- # break 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.848 14:05:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@65 -- # true 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@65 -- # count=0 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@104 -- # count=0 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:21.107 14:05:22 -- bdev/nbd_common.sh@109 -- # return 0 00:04:21.107 14:05:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.366 14:05:22 -- event/event.sh@35 -- # sleep 3 00:04:21.934 [2024-12-04 14:05:23.339471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.194 [2024-12-04 14:05:23.464208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.194 
[2024-12-04 14:05:23.464209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.194 [2024-12-04 14:05:23.567390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:22.194 [2024-12-04 14:05:23.567433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.725 14:05:25 -- event/event.sh@23 -- # for i in {0..2} 00:04:24.725 spdk_app_start Round 1 00:04:24.725 14:05:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.725 14:05:25 -- event/event.sh@25 -- # waitforlisten 57083 /var/tmp/spdk-nbd.sock 00:04:24.725 14:05:25 -- common/autotest_common.sh@829 -- # '[' -z 57083 ']' 00:04:24.725 14:05:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.725 14:05:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.725 14:05:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.725 14:05:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.725 14:05:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 14:05:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.725 14:05:25 -- common/autotest_common.sh@862 -- # return 0 00:04:24.725 14:05:25 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.725 Malloc0 00:04:24.725 14:05:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.984 Malloc1 00:04:24.984 14:05:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@12 -- # local i 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.984 14:05:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.243 /dev/nbd0 00:04:25.243 14:05:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.243 14:05:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.243 14:05:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:25.243 14:05:26 -- common/autotest_common.sh@867 -- # local i 00:04:25.243 14:05:26 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:04:25.243 14:05:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.243 14:05:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:25.243 14:05:26 -- common/autotest_common.sh@871 -- # break 00:04:25.243 14:05:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.243 14:05:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.243 14:05:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.243 1+0 records in 00:04:25.243 1+0 records out 00:04:25.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148903 s, 27.5 MB/s 00:04:25.243 14:05:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.243 14:05:26 -- common/autotest_common.sh@884 -- # size=4096 00:04:25.243 14:05:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.243 14:05:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.243 14:05:26 -- common/autotest_common.sh@887 -- # return 0 00:04:25.243 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.243 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.243 14:05:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.243 /dev/nbd1 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.502 14:05:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:25.502 14:05:26 -- common/autotest_common.sh@867 -- # local i 00:04:25.502 14:05:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.502 14:05:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.502 14:05:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:25.502 14:05:26 -- common/autotest_common.sh@871 -- # break 00:04:25.502 14:05:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.502 14:05:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.502 14:05:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.502 1+0 records in 00:04:25.502 1+0 records out 00:04:25.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017956 s, 22.8 MB/s 00:04:25.502 14:05:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.502 14:05:26 -- common/autotest_common.sh@884 -- # size=4096 00:04:25.502 14:05:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:25.502 14:05:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.502 14:05:26 -- common/autotest_common.sh@887 -- # return 0 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.502 { 00:04:25.502 "nbd_device": "/dev/nbd0", 00:04:25.502 "bdev_name": "Malloc0" 00:04:25.502 }, 00:04:25.502 { 
00:04:25.502 "nbd_device": "/dev/nbd1", 00:04:25.502 "bdev_name": "Malloc1" 00:04:25.502 } 00:04:25.502 ]' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.502 { 00:04:25.502 "nbd_device": "/dev/nbd0", 00:04:25.502 "bdev_name": "Malloc0" 00:04:25.502 }, 00:04:25.502 { 00:04:25.502 "nbd_device": "/dev/nbd1", 00:04:25.502 "bdev_name": "Malloc1" 00:04:25.502 } 00:04:25.502 ]' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.502 /dev/nbd1' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.502 /dev/nbd1' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.502 256+0 records in 00:04:25.502 256+0 records out 00:04:25.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00807847 s, 130 MB/s 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.502 14:05:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.761 256+0 records in 00:04:25.761 256+0 records out 00:04:25.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110688 s, 94.7 MB/s 00:04:25.761 14:05:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.761 14:05:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.761 256+0 records in 00:04:25.761 256+0 records out 00:04:25.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216065 s, 48.5 MB/s 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.761 
14:05:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@51 -- # local i 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@41 -- # break 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.761 14:05:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@41 -- # break 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.020 14:05:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@65 -- # true 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.281 14:05:27 -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.281 14:05:27 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.538 14:05:27 -- event/event.sh@35 -- # sleep 3 00:04:27.104 [2024-12-04 14:05:28.529677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.363 [2024-12-04 14:05:28.655023] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:04:27.363 [2024-12-04 14:05:28.655118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.363 [2024-12-04 14:05:28.758044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.363 [2024-12-04 14:05:28.758084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.892 14:05:30 -- event/event.sh@23 -- # for i in {0..2} 00:04:29.892 spdk_app_start Round 2 00:04:29.892 14:05:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:29.892 14:05:30 -- event/event.sh@25 -- # waitforlisten 57083 /var/tmp/spdk-nbd.sock 00:04:29.892 14:05:30 -- common/autotest_common.sh@829 -- # '[' -z 57083 ']' 00:04:29.892 14:05:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.892 14:05:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.892 14:05:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.892 14:05:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.892 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:29.892 14:05:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.892 14:05:31 -- common/autotest_common.sh@862 -- # return 0 00:04:29.892 14:05:31 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.892 Malloc0 00:04:29.892 14:05:31 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.151 Malloc1 00:04:30.151 14:05:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@12 -- # local i 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.151 14:05:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:30.410 /dev/nbd0 00:04:30.410 14:05:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:30.410 14:05:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:30.410 14:05:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:30.410 14:05:31 -- common/autotest_common.sh@867 -- # local i 00:04:30.410 14:05:31 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:30.410 14:05:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:30.410 14:05:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:30.410 14:05:31 -- common/autotest_common.sh@871 -- # break 00:04:30.410 14:05:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:30.410 14:05:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:30.410 14:05:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.410 1+0 records in 00:04:30.410 1+0 records out 00:04:30.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160944 s, 25.4 MB/s 00:04:30.410 14:05:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.410 14:05:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:30.410 14:05:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.410 14:05:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:30.410 14:05:31 -- common/autotest_common.sh@887 -- # return 0 00:04:30.410 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.410 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.410 14:05:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.669 /dev/nbd1 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.669 14:05:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:30.669 14:05:31 -- common/autotest_common.sh@867 -- # local i 00:04:30.669 14:05:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:30.669 14:05:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:30.669 14:05:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:30.669 14:05:31 -- common/autotest_common.sh@871 -- # break 00:04:30.669 14:05:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:30.669 14:05:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:30.669 14:05:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.669 1+0 records in 00:04:30.669 1+0 records out 00:04:30.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171502 s, 23.9 MB/s 00:04:30.669 14:05:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.669 14:05:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:30.669 14:05:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:30.669 14:05:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:30.669 14:05:31 -- common/autotest_common.sh@887 -- # return 0 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.669 14:05:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.669 14:05:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.669 { 00:04:30.669 "nbd_device": "/dev/nbd0", 00:04:30.669 "bdev_name": "Malloc0" 
00:04:30.669 }, 00:04:30.669 { 00:04:30.669 "nbd_device": "/dev/nbd1", 00:04:30.669 "bdev_name": "Malloc1" 00:04:30.669 } 00:04:30.669 ]' 00:04:30.669 14:05:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.669 { 00:04:30.669 "nbd_device": "/dev/nbd0", 00:04:30.669 "bdev_name": "Malloc0" 00:04:30.669 }, 00:04:30.669 { 00:04:30.669 "nbd_device": "/dev/nbd1", 00:04:30.669 "bdev_name": "Malloc1" 00:04:30.669 } 00:04:30.669 ]' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.928 /dev/nbd1' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.928 /dev/nbd1' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.928 256+0 records in 00:04:30.928 256+0 records out 00:04:30.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00978317 s, 107 MB/s 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.928 256+0 records in 00:04:30.928 256+0 records out 00:04:30.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124761 s, 84.0 MB/s 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.928 256+0 records in 00:04:30.928 256+0 records out 00:04:30.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152687 s, 68.7 MB/s 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@51 -- # local i 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.928 14:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@41 -- # break 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@41 -- # break 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.187 14:05:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@65 -- # true 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.446 14:05:32 -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.446 14:05:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.704 14:05:33 -- event/event.sh@35 -- # sleep 3 00:04:32.640 [2024-12-04 14:05:33.736541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.640 [2024-12-04 14:05:33.864342] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:04:32.640 [2024-12-04 14:05:33.864343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.640 [2024-12-04 14:05:33.967682] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.640 [2024-12-04 14:05:33.967721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.167 14:05:36 -- event/event.sh@38 -- # waitforlisten 57083 /var/tmp/spdk-nbd.sock 00:04:35.167 14:05:36 -- common/autotest_common.sh@829 -- # '[' -z 57083 ']' 00:04:35.167 14:05:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.167 14:05:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.167 14:05:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.167 14:05:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.167 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:35.167 14:05:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.167 14:05:36 -- common/autotest_common.sh@862 -- # return 0 00:04:35.167 14:05:36 -- event/event.sh@39 -- # killprocess 57083 00:04:35.167 14:05:36 -- common/autotest_common.sh@936 -- # '[' -z 57083 ']' 00:04:35.167 14:05:36 -- common/autotest_common.sh@940 -- # kill -0 57083 00:04:35.167 14:05:36 -- common/autotest_common.sh@941 -- # uname 00:04:35.167 14:05:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.167 14:05:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57083 00:04:35.167 killing process with pid 57083 00:04:35.167 14:05:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.167 14:05:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.167 14:05:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57083' 00:04:35.167 14:05:36 -- common/autotest_common.sh@955 -- # kill 57083 00:04:35.168 14:05:36 -- common/autotest_common.sh@960 -- # wait 57083 00:04:35.735 spdk_app_start is called in Round 0. 00:04:35.735 Shutdown signal received, stop current app iteration 00:04:35.735 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:35.735 spdk_app_start is called in Round 1. 00:04:35.735 Shutdown signal received, stop current app iteration 00:04:35.735 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:35.735 spdk_app_start is called in Round 2. 00:04:35.735 Shutdown signal received, stop current app iteration 00:04:35.735 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:35.735 spdk_app_start is called in Round 3. 
00:04:35.735 Shutdown signal received, stop current app iteration 00:04:35.735 ************************************ 00:04:35.735 END TEST app_repeat 00:04:35.735 ************************************ 00:04:35.735 14:05:36 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.735 14:05:36 -- event/event.sh@42 -- # return 0 00:04:35.735 00:04:35.735 real 0m17.073s 00:04:35.735 user 0m36.590s 00:04:35.735 sys 0m1.973s 00:04:35.735 14:05:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.735 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 14:05:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.735 14:05:36 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.735 14:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.735 14:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.735 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 ************************************ 00:04:35.735 START TEST cpu_locks 00:04:35.735 ************************************ 00:04:35.735 14:05:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:35.735 * Looking for test storage... 00:04:35.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:35.735 14:05:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:35.735 14:05:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:35.735 14:05:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:35.735 14:05:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:35.735 14:05:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:35.735 14:05:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:35.735 14:05:37 -- scripts/common.sh@335 -- # IFS=.-: 00:04:35.735 14:05:37 -- scripts/common.sh@335 -- # read -ra ver1 00:04:35.735 14:05:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.735 14:05:37 -- scripts/common.sh@336 -- # read -ra ver2 00:04:35.735 14:05:37 -- scripts/common.sh@337 -- # local 'op=<' 00:04:35.735 14:05:37 -- scripts/common.sh@339 -- # ver1_l=2 00:04:35.735 14:05:37 -- scripts/common.sh@340 -- # ver2_l=1 00:04:35.735 14:05:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:35.735 14:05:37 -- scripts/common.sh@343 -- # case "$op" in 00:04:35.735 14:05:37 -- scripts/common.sh@344 -- # : 1 00:04:35.735 14:05:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:35.735 14:05:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.735 14:05:37 -- scripts/common.sh@364 -- # decimal 1 00:04:35.735 14:05:37 -- scripts/common.sh@352 -- # local d=1 00:04:35.735 14:05:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.735 14:05:37 -- scripts/common.sh@354 -- # echo 1 00:04:35.735 14:05:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:35.735 14:05:37 -- scripts/common.sh@365 -- # decimal 2 00:04:35.735 14:05:37 -- scripts/common.sh@352 -- # local d=2 00:04:35.735 14:05:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.735 14:05:37 -- scripts/common.sh@354 -- # echo 2 00:04:35.735 14:05:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:35.735 14:05:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:35.735 14:05:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:35.735 14:05:37 -- scripts/common.sh@367 -- # return 0 00:04:35.735 14:05:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:35.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.735 --rc genhtml_branch_coverage=1 00:04:35.735 --rc genhtml_function_coverage=1 00:04:35.735 --rc genhtml_legend=1 00:04:35.735 --rc geninfo_all_blocks=1 00:04:35.735 --rc geninfo_unexecuted_blocks=1 00:04:35.735 00:04:35.735 ' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:35.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.735 --rc genhtml_branch_coverage=1 00:04:35.735 --rc genhtml_function_coverage=1 00:04:35.735 --rc genhtml_legend=1 00:04:35.735 --rc geninfo_all_blocks=1 00:04:35.735 --rc geninfo_unexecuted_blocks=1 00:04:35.735 00:04:35.735 ' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:35.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.735 --rc genhtml_branch_coverage=1 00:04:35.735 --rc genhtml_function_coverage=1 00:04:35.735 --rc genhtml_legend=1 00:04:35.735 --rc geninfo_all_blocks=1 00:04:35.735 --rc geninfo_unexecuted_blocks=1 00:04:35.735 00:04:35.735 ' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:35.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.735 --rc genhtml_branch_coverage=1 00:04:35.735 --rc genhtml_function_coverage=1 00:04:35.735 --rc genhtml_legend=1 00:04:35.735 --rc geninfo_all_blocks=1 00:04:35.735 --rc geninfo_unexecuted_blocks=1 00:04:35.735 00:04:35.735 ' 00:04:35.735 14:05:37 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.735 14:05:37 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.735 14:05:37 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.735 14:05:37 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.735 14:05:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.735 14:05:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.735 14:05:37 -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 ************************************ 00:04:35.735 START TEST default_locks 00:04:35.735 ************************************ 00:04:35.735 14:05:37 -- common/autotest_common.sh@1114 -- # default_locks 00:04:35.735 14:05:37 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57502 00:04:35.735 14:05:37 -- event/cpu_locks.sh@47 -- # waitforlisten 57502 00:04:35.735 14:05:37 -- common/autotest_common.sh@829 -- # '[' -z 57502 ']' 00:04:35.735 14:05:37 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.735 14:05:37 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.735 14:05:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.735 14:05:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.735 14:05:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.735 14:05:37 -- common/autotest_common.sh@10 -- # set +x 00:04:35.735 [2024-12-04 14:05:37.197318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:35.735 [2024-12-04 14:05:37.197442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57502 ] 00:04:35.994 [2024-12-04 14:05:37.346100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.253 [2024-12-04 14:05:37.482707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.253 [2024-12-04 14:05:37.482858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.819 14:05:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.819 14:05:37 -- common/autotest_common.sh@862 -- # return 0 00:04:36.819 14:05:37 -- event/cpu_locks.sh@49 -- # locks_exist 57502 00:04:36.819 14:05:37 -- event/cpu_locks.sh@22 -- # lslocks -p 57502 00:04:36.819 14:05:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.819 14:05:38 -- event/cpu_locks.sh@50 -- # killprocess 57502 00:04:36.819 14:05:38 -- common/autotest_common.sh@936 -- # '[' -z 57502 ']' 00:04:36.819 14:05:38 -- common/autotest_common.sh@940 -- # kill -0 57502 00:04:36.819 14:05:38 -- common/autotest_common.sh@941 -- # uname 00:04:36.819 14:05:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.819 14:05:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57502 00:04:36.819 14:05:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.819 14:05:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.819 killing process with pid 57502 00:04:36.819 14:05:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57502' 00:04:36.819 14:05:38 -- common/autotest_common.sh@955 -- # kill 57502 00:04:36.819 14:05:38 -- common/autotest_common.sh@960 -- # wait 57502 00:04:38.195 14:05:39 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57502 00:04:38.195 14:05:39 -- common/autotest_common.sh@650 -- # local es=0 00:04:38.195 14:05:39 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57502 00:04:38.195 14:05:39 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:38.195 14:05:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.195 14:05:39 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:38.195 14:05:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.195 14:05:39 -- common/autotest_common.sh@653 -- # waitforlisten 57502 00:04:38.195 14:05:39 -- common/autotest_common.sh@829 -- # '[' -z 57502 ']' 00:04:38.195 14:05:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.195 14:05:39 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.195 14:05:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.195 14:05:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.195 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.195 ERROR: process (pid: 57502) is no longer running 00:04:38.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57502) - No such process 00:04:38.195 14:05:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.195 14:05:39 -- common/autotest_common.sh@862 -- # return 1 00:04:38.195 14:05:39 -- common/autotest_common.sh@653 -- # es=1 00:04:38.195 14:05:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.195 14:05:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.195 14:05:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.195 14:05:39 -- event/cpu_locks.sh@54 -- # no_locks 00:04:38.195 14:05:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.195 14:05:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.195 14:05:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.195 00:04:38.195 real 0m2.276s 00:04:38.195 user 0m2.250s 00:04:38.195 sys 0m0.423s 00:04:38.195 14:05:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.195 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.195 ************************************ 00:04:38.195 END TEST default_locks 00:04:38.195 ************************************ 00:04:38.195 14:05:39 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:38.195 14:05:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.195 14:05:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.195 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.195 ************************************ 00:04:38.195 START TEST default_locks_via_rpc 00:04:38.195 ************************************ 00:04:38.195 14:05:39 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:38.195 14:05:39 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57560 00:04:38.195 14:05:39 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.195 14:05:39 -- event/cpu_locks.sh@63 -- # waitforlisten 57560 00:04:38.195 14:05:39 -- common/autotest_common.sh@829 -- # '[' -z 57560 ']' 00:04:38.195 14:05:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.196 14:05:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.196 14:05:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.196 14:05:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.196 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:04:38.196 [2024-12-04 14:05:39.514701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:38.196 [2024-12-04 14:05:39.514809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57560 ] 00:04:38.456 [2024-12-04 14:05:39.666177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.456 [2024-12-04 14:05:39.886385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.456 [2024-12-04 14:05:39.886606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.840 14:05:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.840 14:05:41 -- common/autotest_common.sh@862 -- # return 0 00:04:39.840 14:05:41 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:39.840 14:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.840 14:05:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.840 14:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.840 14:05:41 -- event/cpu_locks.sh@67 -- # no_locks 00:04:39.840 14:05:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.840 14:05:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.840 14:05:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.840 14:05:41 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:39.840 14:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.840 14:05:41 -- common/autotest_common.sh@10 -- # set +x 00:04:39.840 14:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.840 14:05:41 -- event/cpu_locks.sh@71 -- # locks_exist 57560 00:04:39.840 14:05:41 -- event/cpu_locks.sh@22 -- # lslocks -p 57560 00:04:39.840 14:05:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.099 14:05:41 -- event/cpu_locks.sh@73 -- # killprocess 57560 00:04:40.099 14:05:41 -- common/autotest_common.sh@936 -- # '[' -z 57560 ']' 00:04:40.099 14:05:41 -- common/autotest_common.sh@940 -- # kill -0 57560 00:04:40.099 14:05:41 -- common/autotest_common.sh@941 -- # uname 00:04:40.099 14:05:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.099 14:05:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57560 00:04:40.099 14:05:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.099 killing process with pid 57560 00:04:40.099 14:05:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.099 14:05:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57560' 00:04:40.099 14:05:41 -- common/autotest_common.sh@955 -- # kill 57560 00:04:40.099 14:05:41 -- common/autotest_common.sh@960 -- # wait 57560 00:04:41.476 00:04:41.476 real 0m3.061s 00:04:41.476 user 0m3.147s 00:04:41.476 sys 0m0.564s 00:04:41.476 14:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.476 ************************************ 00:04:41.476 END TEST default_locks_via_rpc 00:04:41.476 ************************************ 00:04:41.476 14:05:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 14:05:42 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.476 14:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.476 14:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.476 14:05:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 
************************************ 00:04:41.476 START TEST non_locking_app_on_locked_coremask 00:04:41.476 ************************************ 00:04:41.476 14:05:42 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:04:41.476 14:05:42 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57625 00:04:41.476 14:05:42 -- event/cpu_locks.sh@81 -- # waitforlisten 57625 /var/tmp/spdk.sock 00:04:41.476 14:05:42 -- common/autotest_common.sh@829 -- # '[' -z 57625 ']' 00:04:41.476 14:05:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.476 14:05:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.476 14:05:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.476 14:05:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.476 14:05:42 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.476 14:05:42 -- common/autotest_common.sh@10 -- # set +x 00:04:41.476 [2024-12-04 14:05:42.643363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.476 [2024-12-04 14:05:42.643490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:04:41.476 [2024-12-04 14:05:42.795444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.735 [2024-12-04 14:05:42.946386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.735 [2024-12-04 14:05:42.946553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.994 14:05:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.994 14:05:43 -- common/autotest_common.sh@862 -- # return 0 00:04:41.994 14:05:43 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57641 00:04:41.994 14:05:43 -- event/cpu_locks.sh@85 -- # waitforlisten 57641 /var/tmp/spdk2.sock 00:04:41.994 14:05:43 -- common/autotest_common.sh@829 -- # '[' -z 57641 ']' 00:04:41.994 14:05:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.994 14:05:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.994 14:05:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.994 14:05:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.994 14:05:43 -- common/autotest_common.sh@10 -- # set +x 00:04:41.994 14:05:43 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:42.254 [2024-12-04 14:05:43.511756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.254 [2024-12-04 14:05:43.511859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57641 ] 00:04:42.254 [2024-12-04 14:05:43.658027] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
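The "CPU core locks deactivated" notice directly above comes from the second target (pid 57641), which was launched with --disable-cpumask-locks and its own RPC socket, so it can share core mask 0x1 with pid 57625 without contention. A hedged sketch of the same coexistence, reusing the binary path from this log (sleeps illustrative):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 & first=$!
  sleep 2                                              # first instance claims core 0
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & second=$!
  sleep 2                                              # second instance skips the claim
  kill "$second" "$first"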
00:04:42.254 [2024-12-04 14:05:43.658061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.514 [2024-12-04 14:05:43.937635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.514 [2024-12-04 14:05:43.937783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.890 14:05:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.890 14:05:45 -- common/autotest_common.sh@862 -- # return 0 00:04:43.890 14:05:45 -- event/cpu_locks.sh@87 -- # locks_exist 57625 00:04:43.890 14:05:45 -- event/cpu_locks.sh@22 -- # lslocks -p 57625 00:04:43.890 14:05:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.890 14:05:45 -- event/cpu_locks.sh@89 -- # killprocess 57625 00:04:43.890 14:05:45 -- common/autotest_common.sh@936 -- # '[' -z 57625 ']' 00:04:43.890 14:05:45 -- common/autotest_common.sh@940 -- # kill -0 57625 00:04:43.890 14:05:45 -- common/autotest_common.sh@941 -- # uname 00:04:43.890 14:05:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.890 14:05:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57625 00:04:43.890 14:05:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.890 killing process with pid 57625 00:04:43.890 14:05:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.890 14:05:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57625' 00:04:43.890 14:05:45 -- common/autotest_common.sh@955 -- # kill 57625 00:04:43.890 14:05:45 -- common/autotest_common.sh@960 -- # wait 57625 00:04:46.423 14:05:47 -- event/cpu_locks.sh@90 -- # killprocess 57641 00:04:46.423 14:05:47 -- common/autotest_common.sh@936 -- # '[' -z 57641 ']' 00:04:46.423 14:05:47 -- common/autotest_common.sh@940 -- # kill -0 57641 00:04:46.423 14:05:47 -- common/autotest_common.sh@941 -- # uname 00:04:46.423 14:05:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.423 14:05:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57641 00:04:46.423 14:05:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.423 killing process with pid 57641 00:04:46.423 14:05:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.423 14:05:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57641' 00:04:46.423 14:05:47 -- common/autotest_common.sh@955 -- # kill 57641 00:04:46.423 14:05:47 -- common/autotest_common.sh@960 -- # wait 57641 00:04:47.851 00:04:47.851 real 0m6.268s 00:04:47.851 user 0m6.614s 00:04:47.851 sys 0m0.828s 00:04:47.851 14:05:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.851 ************************************ 00:04:47.851 END TEST non_locking_app_on_locked_coremask 00:04:47.851 ************************************ 00:04:47.851 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.851 14:05:48 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:47.851 14:05:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.851 14:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.851 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.851 ************************************ 00:04:47.851 START TEST locking_app_on_unlocked_coremask 00:04:47.851 ************************************ 00:04:47.851 14:05:48 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:04:47.851 14:05:48 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57734 00:04:47.851 14:05:48 -- event/cpu_locks.sh@99 -- # waitforlisten 57734 /var/tmp/spdk.sock 00:04:47.851 14:05:48 -- common/autotest_common.sh@829 -- # '[' -z 57734 ']' 00:04:47.851 14:05:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.851 14:05:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.851 14:05:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.851 14:05:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.851 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:04:47.851 14:05:48 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:47.851 [2024-12-04 14:05:48.945281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:47.851 [2024-12-04 14:05:48.945369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57734 ] 00:04:47.851 [2024-12-04 14:05:49.081742] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:47.851 [2024-12-04 14:05:49.081782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.851 [2024-12-04 14:05:49.226216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.851 [2024-12-04 14:05:49.226364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.432 14:05:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.432 14:05:49 -- common/autotest_common.sh@862 -- # return 0 00:04:48.432 14:05:49 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57750 00:04:48.432 14:05:49 -- event/cpu_locks.sh@103 -- # waitforlisten 57750 /var/tmp/spdk2.sock 00:04:48.432 14:05:49 -- common/autotest_common.sh@829 -- # '[' -z 57750 ']' 00:04:48.432 14:05:49 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:48.432 14:05:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.432 14:05:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.432 14:05:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.432 14:05:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.432 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.432 [2024-12-04 14:05:49.829713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:48.432 [2024-12-04 14:05:49.829821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57750 ] 00:04:48.690 [2024-12-04 14:05:49.977103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.949 [2024-12-04 14:05:50.263510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.949 [2024-12-04 14:05:50.263666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.883 14:05:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.883 14:05:51 -- common/autotest_common.sh@862 -- # return 0 00:04:49.883 14:05:51 -- event/cpu_locks.sh@105 -- # locks_exist 57750 00:04:49.883 14:05:51 -- event/cpu_locks.sh@22 -- # lslocks -p 57750 00:04:49.883 14:05:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:50.141 14:05:51 -- event/cpu_locks.sh@107 -- # killprocess 57734 00:04:50.141 14:05:51 -- common/autotest_common.sh@936 -- # '[' -z 57734 ']' 00:04:50.399 14:05:51 -- common/autotest_common.sh@940 -- # kill -0 57734 00:04:50.399 14:05:51 -- common/autotest_common.sh@941 -- # uname 00:04:50.399 14:05:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.399 14:05:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57734 00:04:50.399 14:05:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.399 killing process with pid 57734 00:04:50.399 14:05:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.399 14:05:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57734' 00:04:50.399 14:05:51 -- common/autotest_common.sh@955 -- # kill 57734 00:04:50.399 14:05:51 -- common/autotest_common.sh@960 -- # wait 57734 00:04:52.931 14:05:54 -- event/cpu_locks.sh@108 -- # killprocess 57750 00:04:52.931 14:05:54 -- common/autotest_common.sh@936 -- # '[' -z 57750 ']' 00:04:52.931 14:05:54 -- common/autotest_common.sh@940 -- # kill -0 57750 00:04:52.931 14:05:54 -- common/autotest_common.sh@941 -- # uname 00:04:52.931 14:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.931 14:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57750 00:04:52.931 14:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:52.931 killing process with pid 57750 00:04:52.931 14:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:52.931 14:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57750' 00:04:52.931 14:05:54 -- common/autotest_common.sh@955 -- # kill 57750 00:04:52.931 14:05:54 -- common/autotest_common.sh@960 -- # wait 57750 00:04:53.868 00:04:53.868 real 0m6.328s 00:04:53.868 user 0m6.725s 00:04:53.868 sys 0m0.792s 00:04:53.868 14:05:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.868 ************************************ 00:04:53.868 END TEST locking_app_on_unlocked_coremask 00:04:53.868 14:05:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.868 ************************************ 00:04:53.868 14:05:55 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:53.868 14:05:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.868 14:05:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.868 14:05:55 -- common/autotest_common.sh@10 -- # set +x 
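Summary of the locking_app_on_unlocked_coremask test finishing above: the first target (pid 57734) ran with --disable-cpumask-locks and so held no core lock, while the second (pid 57750) ran without the flag and claimed core 0, which is why the lslocks check targeted pid 57750. Assuming two such targets are still running as $first and $second, the asymmetry could be spot-checked like this:

  lslocks -p "$first"  | grep -c spdk_cpu_lock    # 0: lock claiming was disabled
  lslocks -p "$second" | grep -c spdk_cpu_lock    # 1: this instance claimed core 0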
00:04:53.868 ************************************ 00:04:53.868 START TEST locking_app_on_locked_coremask 00:04:53.868 ************************************ 00:04:53.868 14:05:55 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:04:53.868 14:05:55 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57849 00:04:53.868 14:05:55 -- event/cpu_locks.sh@116 -- # waitforlisten 57849 /var/tmp/spdk.sock 00:04:53.868 14:05:55 -- common/autotest_common.sh@829 -- # '[' -z 57849 ']' 00:04:53.868 14:05:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.868 14:05:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.868 14:05:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.868 14:05:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.868 14:05:55 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.868 14:05:55 -- common/autotest_common.sh@10 -- # set +x 00:04:53.868 [2024-12-04 14:05:55.329216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:53.868 [2024-12-04 14:05:55.329335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57849 ] 00:04:54.128 [2024-12-04 14:05:55.477236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.388 [2024-12-04 14:05:55.678832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.389 [2024-12-04 14:05:55.679067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.775 14:05:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.775 14:05:56 -- common/autotest_common.sh@862 -- # return 0 00:04:55.775 14:05:56 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57871 00:04:55.775 14:05:56 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57871 /var/tmp/spdk2.sock 00:04:55.775 14:05:56 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.775 14:05:56 -- common/autotest_common.sh@650 -- # local es=0 00:04:55.775 14:05:56 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57871 /var/tmp/spdk2.sock 00:04:55.775 14:05:56 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:55.776 14:05:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.776 14:05:56 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:55.776 14:05:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.776 14:05:56 -- common/autotest_common.sh@653 -- # waitforlisten 57871 /var/tmp/spdk2.sock 00:04:55.776 14:05:56 -- common/autotest_common.sh@829 -- # '[' -z 57871 ']' 00:04:55.776 14:05:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.776 14:05:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.776 14:05:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
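The NOT waitforlisten 57871 combination being set up here inverts the wrapped command's exit status: the test only passes if the second target fails to come up, since pid 57849 already holds the core 0 lock. The real helper in autotest_common.sh also does the es= bookkeeping visible in this log; a minimal stand-in with the same observable behavior:

  NOT() {
    if "$@"; then return 1; else return 0; fi    # succeed only when the command fails
  }
  NOT false && echo "failure correctly inverted"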
00:04:55.776 14:05:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.776 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.776 [2024-12-04 14:05:56.930356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:55.776 [2024-12-04 14:05:56.930475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57871 ] 00:04:55.776 [2024-12-04 14:05:57.082628] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57849 has claimed it. 00:04:55.776 [2024-12-04 14:05:57.082687] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.343 ERROR: process (pid: 57871) is no longer running 00:04:56.343 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57871) - No such process 00:04:56.343 14:05:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.343 14:05:57 -- common/autotest_common.sh@862 -- # return 1 00:04:56.343 14:05:57 -- common/autotest_common.sh@653 -- # es=1 00:04:56.343 14:05:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.343 14:05:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.343 14:05:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.343 14:05:57 -- event/cpu_locks.sh@122 -- # locks_exist 57849 00:04:56.343 14:05:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57849 00:04:56.343 14:05:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.343 14:05:57 -- event/cpu_locks.sh@124 -- # killprocess 57849 00:04:56.343 14:05:57 -- common/autotest_common.sh@936 -- # '[' -z 57849 ']' 00:04:56.343 14:05:57 -- common/autotest_common.sh@940 -- # kill -0 57849 00:04:56.343 14:05:57 -- common/autotest_common.sh@941 -- # uname 00:04:56.343 14:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.343 14:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57849 00:04:56.343 14:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.343 killing process with pid 57849 00:04:56.343 14:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.343 14:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57849' 00:04:56.343 14:05:57 -- common/autotest_common.sh@955 -- # kill 57849 00:04:56.343 14:05:57 -- common/autotest_common.sh@960 -- # wait 57849 00:04:57.720 00:04:57.720 real 0m3.609s 00:04:57.720 user 0m3.854s 00:04:57.720 sys 0m0.632s 00:04:57.720 14:05:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.720 14:05:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.720 ************************************ 00:04:57.720 END TEST locking_app_on_locked_coremask 00:04:57.720 ************************************ 00:04:57.720 14:05:58 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:57.720 14:05:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.720 14:05:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.720 14:05:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.720 ************************************ 00:04:57.720 START TEST locking_overlapped_coremask 00:04:57.720 ************************************ 00:04:57.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.720 14:05:58 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:04:57.720 14:05:58 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57924 00:04:57.720 14:05:58 -- event/cpu_locks.sh@133 -- # waitforlisten 57924 /var/tmp/spdk.sock 00:04:57.720 14:05:58 -- common/autotest_common.sh@829 -- # '[' -z 57924 ']' 00:04:57.720 14:05:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.720 14:05:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.720 14:05:58 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:57.720 14:05:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.720 14:05:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.720 14:05:58 -- common/autotest_common.sh@10 -- # set +x 00:04:57.720 [2024-12-04 14:05:58.996362] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.720 [2024-12-04 14:05:58.996487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57924 ] 00:04:57.720 [2024-12-04 14:05:59.148288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.982 [2024-12-04 14:05:59.373473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:57.982 [2024-12-04 14:05:59.373940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.982 [2024-12-04 14:05:59.374280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.982 [2024-12-04 14:05:59.374369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.369 14:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.369 14:06:00 -- common/autotest_common.sh@862 -- # return 0 00:04:59.369 14:06:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57951 00:04:59.369 14:06:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57951 /var/tmp/spdk2.sock 00:04:59.369 14:06:00 -- common/autotest_common.sh@650 -- # local es=0 00:04:59.369 14:06:00 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57951 /var/tmp/spdk2.sock 00:04:59.369 14:06:00 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:59.369 14:06:00 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:59.369 14:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.369 14:06:00 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:59.369 14:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.369 14:06:00 -- common/autotest_common.sh@653 -- # waitforlisten 57951 /var/tmp/spdk2.sock 00:04:59.369 14:06:00 -- common/autotest_common.sh@829 -- # '[' -z 57951 ']' 00:04:59.369 14:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.369 14:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.369 14:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
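locking_overlapped_coremask pits -m 0x7 (cores 0 through 2, pid 57924) against -m 0x1c (cores 2 through 4, pid 57951). The masks intersect on core 2, so the second claim must fail, which is exactly what the next lines report. The overlap is plain bit arithmetic:

  printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. bit 2, i.e. core 2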
00:04:59.369 14:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.369 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:04:59.369 [2024-12-04 14:06:00.606575] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.369 [2024-12-04 14:06:00.606713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 00:04:59.369 [2024-12-04 14:06:00.758424] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57924 has claimed it. 00:04:59.369 [2024-12-04 14:06:00.762134] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.934 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57951) - No such process 00:04:59.934 ERROR: process (pid: 57951) is no longer running 00:04:59.934 14:06:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.934 14:06:01 -- common/autotest_common.sh@862 -- # return 1 00:04:59.934 14:06:01 -- common/autotest_common.sh@653 -- # es=1 00:04:59.934 14:06:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.934 14:06:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.934 14:06:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.934 14:06:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:59.934 14:06:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.934 14:06:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.934 14:06:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.934 14:06:01 -- event/cpu_locks.sh@141 -- # killprocess 57924 00:04:59.934 14:06:01 -- common/autotest_common.sh@936 -- # '[' -z 57924 ']' 00:04:59.934 14:06:01 -- common/autotest_common.sh@940 -- # kill -0 57924 00:04:59.934 14:06:01 -- common/autotest_common.sh@941 -- # uname 00:04:59.934 14:06:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.934 14:06:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57924 00:04:59.934 14:06:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.934 14:06:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.934 14:06:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57924' 00:04:59.934 killing process with pid 57924 00:04:59.934 14:06:01 -- common/autotest_common.sh@955 -- # kill 57924 00:04:59.934 14:06:01 -- common/autotest_common.sh@960 -- # wait 57924 00:05:01.306 00:05:01.306 real 0m3.509s 00:05:01.306 user 0m9.399s 00:05:01.306 sys 0m0.543s 00:05:01.306 14:06:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.306 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 ************************************ 00:05:01.306 END TEST locking_overlapped_coremask 00:05:01.306 ************************************ 00:05:01.306 14:06:02 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:01.306 14:06:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.306 14:06:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.306 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 ************************************ 00:05:01.306 START TEST locking_overlapped_coremask_via_rpc 00:05:01.306 ************************************ 00:05:01.306 14:06:02 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:01.306 14:06:02 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58004 00:05:01.306 14:06:02 -- event/cpu_locks.sh@149 -- # waitforlisten 58004 /var/tmp/spdk.sock 00:05:01.306 14:06:02 -- common/autotest_common.sh@829 -- # '[' -z 58004 ']' 00:05:01.306 14:06:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.306 14:06:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.306 14:06:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.306 14:06:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.306 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:01.306 14:06:02 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:01.306 [2024-12-04 14:06:02.549077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:01.306 [2024-12-04 14:06:02.549208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:05:01.306 [2024-12-04 14:06:02.695843] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.306 [2024-12-04 14:06:02.695878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.563 [2024-12-04 14:06:02.834819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:01.563 [2024-12-04 14:06:02.835166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.563 [2024-12-04 14:06:02.835406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.563 [2024-12-04 14:06:02.835510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.128 14:06:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.128 14:06:03 -- common/autotest_common.sh@862 -- # return 0 00:05:02.128 14:06:03 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58016 00:05:02.128 14:06:03 -- event/cpu_locks.sh@153 -- # waitforlisten 58016 /var/tmp/spdk2.sock 00:05:02.128 14:06:03 -- common/autotest_common.sh@829 -- # '[' -z 58016 ']' 00:05:02.128 14:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.128 14:06:03 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:02.128 14:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.128 14:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
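In the via_rpc variant both targets start with --disable-cpumask-locks, so the overlapping masks 0x7 and 0x1c coexist at launch; the contention is then triggered on demand through the framework_enable_cpumask_locks RPC a few lines below. A sketch of the two launches under that assumption (binary path as elsewhere in this log):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x7  --disable-cpumask-locks &
  "$SPDK_TGT" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &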
00:05:02.128 14:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.128 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:02.128 [2024-12-04 14:06:03.383642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:02.128 [2024-12-04 14:06:03.383739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58016 ] 00:05:02.128 [2024-12-04 14:06:03.529883] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:02.128 [2024-12-04 14:06:03.529918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.385 [2024-12-04 14:06:03.810327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:02.385 [2024-12-04 14:06:03.810693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.385 [2024-12-04 14:06:03.814169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.385 [2024-12-04 14:06:03.814197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:03.759 14:06:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.759 14:06:04 -- common/autotest_common.sh@862 -- # return 0 00:05:03.759 14:06:04 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.759 14:06:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.759 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 14:06:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.759 14:06:04 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.759 14:06:04 -- common/autotest_common.sh@650 -- # local es=0 00:05:03.759 14:06:04 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.759 14:06:04 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:03.759 14:06:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.759 14:06:04 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:03.759 14:06:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.759 14:06:04 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.759 14:06:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.759 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 [2024-12-04 14:06:04.903213] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58004 has claimed it. 
00:05:03.759 request: 00:05:03.759 { 00:05:03.759 "method": "framework_enable_cpumask_locks", 00:05:03.759 "req_id": 1 00:05:03.759 } 00:05:03.759 Got JSON-RPC error response 00:05:03.759 response: 00:05:03.759 { 00:05:03.759 "code": -32603, 00:05:03.759 "message": "Failed to claim CPU core: 2" 00:05:03.759 } 00:05:03.759 14:06:04 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:03.759 14:06:04 -- common/autotest_common.sh@653 -- # es=1 00:05:03.759 14:06:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.759 14:06:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:03.759 14:06:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.759 14:06:04 -- event/cpu_locks.sh@158 -- # waitforlisten 58004 /var/tmp/spdk.sock 00:05:03.759 14:06:04 -- common/autotest_common.sh@829 -- # '[' -z 58004 ']' 00:05:03.759 14:06:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.759 14:06:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.759 14:06:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.759 14:06:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.759 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:03.759 14:06:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.759 14:06:05 -- common/autotest_common.sh@862 -- # return 0 00:05:03.759 14:06:05 -- event/cpu_locks.sh@159 -- # waitforlisten 58016 /var/tmp/spdk2.sock 00:05:03.759 14:06:05 -- common/autotest_common.sh@829 -- # '[' -z 58016 ']' 00:05:03.759 14:06:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.759 14:06:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.759 14:06:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
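The request/response pair above is the second target's attempt to enable locks over /var/tmp/spdk2.sock failing with JSON-RPC error -32603, because pid 58004 enabled its locks first and already claimed core 2. Reproducing both calls by hand, assuming the stock rpc client at scripts/rpc.py in the same repo checkout:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first instance: claims cores 0-2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # overlaps on core 2: error -32603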
00:05:03.759 14:06:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.759 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 14:06:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.017 14:06:05 -- common/autotest_common.sh@862 -- # return 0 00:05:04.017 14:06:05 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:04.017 14:06:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.017 14:06:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.017 14:06:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.017 00:05:04.017 real 0m2.816s 00:05:04.017 user 0m1.118s 00:05:04.017 sys 0m0.128s 00:05:04.017 14:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:04.017 ************************************ 00:05:04.017 END TEST locking_overlapped_coremask_via_rpc 00:05:04.017 ************************************ 00:05:04.017 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 14:06:05 -- event/cpu_locks.sh@174 -- # cleanup 00:05:04.017 14:06:05 -- event/cpu_locks.sh@15 -- # [[ -z 58004 ]] 00:05:04.017 14:06:05 -- event/cpu_locks.sh@15 -- # killprocess 58004 00:05:04.017 14:06:05 -- common/autotest_common.sh@936 -- # '[' -z 58004 ']' 00:05:04.017 14:06:05 -- common/autotest_common.sh@940 -- # kill -0 58004 00:05:04.017 14:06:05 -- common/autotest_common.sh@941 -- # uname 00:05:04.017 14:06:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.017 14:06:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58004 00:05:04.017 killing process with pid 58004 00:05:04.017 14:06:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.017 14:06:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.017 14:06:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58004' 00:05:04.017 14:06:05 -- common/autotest_common.sh@955 -- # kill 58004 00:05:04.017 14:06:05 -- common/autotest_common.sh@960 -- # wait 58004 00:05:05.388 14:06:06 -- event/cpu_locks.sh@16 -- # [[ -z 58016 ]] 00:05:05.388 14:06:06 -- event/cpu_locks.sh@16 -- # killprocess 58016 00:05:05.388 14:06:06 -- common/autotest_common.sh@936 -- # '[' -z 58016 ']' 00:05:05.388 14:06:06 -- common/autotest_common.sh@940 -- # kill -0 58016 00:05:05.388 14:06:06 -- common/autotest_common.sh@941 -- # uname 00:05:05.388 14:06:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.388 14:06:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58016 00:05:05.388 killing process with pid 58016 00:05:05.388 14:06:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:05.388 14:06:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:05.388 14:06:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58016' 00:05:05.388 14:06:06 -- common/autotest_common.sh@955 -- # kill 58016 00:05:05.388 14:06:06 -- common/autotest_common.sh@960 -- # wait 58016 00:05:06.323 14:06:07 -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.323 Process with pid 58004 is not found 00:05:06.323 Process with pid 58016 is not found 00:05:06.323 14:06:07 -- event/cpu_locks.sh@1 -- # cleanup 00:05:06.323 14:06:07 -- event/cpu_locks.sh@15 -- # [[ -z 58004 ]] 
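check_remaining_locks, used twice in this suite, globs /var/tmp/spdk_cpu_lock_* and compares the result against zero-padded names for each claimed core (the locks_expected expansion visible above). For a target holding mask 0x7 the expected set can be listed directly:

  ls /var/tmp/spdk_cpu_lock_{000..002} 2>/dev/null
  # /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002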
00:05:06.323 14:06:07 -- event/cpu_locks.sh@15 -- # killprocess 58004 00:05:06.323 14:06:07 -- common/autotest_common.sh@936 -- # '[' -z 58004 ']' 00:05:06.323 14:06:07 -- common/autotest_common.sh@940 -- # kill -0 58004 00:05:06.323 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58004) - No such process 00:05:06.323 14:06:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58004 is not found' 00:05:06.324 14:06:07 -- event/cpu_locks.sh@16 -- # [[ -z 58016 ]] 00:05:06.324 14:06:07 -- event/cpu_locks.sh@16 -- # killprocess 58016 00:05:06.324 14:06:07 -- common/autotest_common.sh@936 -- # '[' -z 58016 ']' 00:05:06.324 14:06:07 -- common/autotest_common.sh@940 -- # kill -0 58016 00:05:06.324 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58016) - No such process 00:05:06.324 14:06:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58016 is not found' 00:05:06.324 14:06:07 -- event/cpu_locks.sh@18 -- # rm -f 00:05:06.324 ************************************ 00:05:06.324 END TEST cpu_locks 00:05:06.324 ************************************ 00:05:06.324 00:05:06.324 real 0m30.734s 00:05:06.324 user 0m52.393s 00:05:06.324 sys 0m4.664s 00:05:06.324 14:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.324 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.324 ************************************ 00:05:06.324 END TEST event 00:05:06.324 ************************************ 00:05:06.324 00:05:06.324 real 0m56.328s 00:05:06.324 user 1m41.302s 00:05:06.324 sys 0m7.444s 00:05:06.324 14:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.324 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.324 14:06:07 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:06.324 14:06:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.324 14:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.324 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.585 ************************************ 00:05:06.585 START TEST thread 00:05:06.585 ************************************ 00:05:06.585 14:06:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:06.585 * Looking for test storage... 
00:05:06.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:06.585 14:06:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:06.585 14:06:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:06.585 14:06:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:06.585 14:06:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:06.586 14:06:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:06.586 14:06:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:06.586 14:06:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:06.586 14:06:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:06.586 14:06:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:06.586 14:06:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.586 14:06:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:06.586 14:06:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:06.586 14:06:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:06.586 14:06:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:06.586 14:06:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:06.586 14:06:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:06.586 14:06:07 -- scripts/common.sh@344 -- # : 1 00:05:06.586 14:06:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:06.586 14:06:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.586 14:06:07 -- scripts/common.sh@364 -- # decimal 1 00:05:06.586 14:06:07 -- scripts/common.sh@352 -- # local d=1 00:05:06.586 14:06:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.586 14:06:07 -- scripts/common.sh@354 -- # echo 1 00:05:06.586 14:06:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:06.586 14:06:07 -- scripts/common.sh@365 -- # decimal 2 00:05:06.586 14:06:07 -- scripts/common.sh@352 -- # local d=2 00:05:06.586 14:06:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.586 14:06:07 -- scripts/common.sh@354 -- # echo 2 00:05:06.586 14:06:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:06.586 14:06:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:06.586 14:06:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:06.586 14:06:07 -- scripts/common.sh@367 -- # return 0 00:05:06.586 14:06:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.586 14:06:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:06.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.586 --rc genhtml_branch_coverage=1 00:05:06.586 --rc genhtml_function_coverage=1 00:05:06.586 --rc genhtml_legend=1 00:05:06.586 --rc geninfo_all_blocks=1 00:05:06.586 --rc geninfo_unexecuted_blocks=1 00:05:06.586 00:05:06.586 ' 00:05:06.586 14:06:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:06.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.586 --rc genhtml_branch_coverage=1 00:05:06.586 --rc genhtml_function_coverage=1 00:05:06.586 --rc genhtml_legend=1 00:05:06.586 --rc geninfo_all_blocks=1 00:05:06.586 --rc geninfo_unexecuted_blocks=1 00:05:06.586 00:05:06.586 ' 00:05:06.586 14:06:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:06.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.586 --rc genhtml_branch_coverage=1 00:05:06.586 --rc genhtml_function_coverage=1 00:05:06.586 --rc genhtml_legend=1 00:05:06.586 --rc geninfo_all_blocks=1 00:05:06.586 --rc geninfo_unexecuted_blocks=1 00:05:06.586 00:05:06.586 ' 00:05:06.586 14:06:07 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:06.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.586 --rc genhtml_branch_coverage=1 00:05:06.586 --rc genhtml_function_coverage=1 00:05:06.586 --rc genhtml_legend=1 00:05:06.586 --rc geninfo_all_blocks=1 00:05:06.586 --rc geninfo_unexecuted_blocks=1 00:05:06.586 00:05:06.586 ' 00:05:06.586 14:06:07 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.586 14:06:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:06.586 14:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.586 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.586 ************************************ 00:05:06.586 START TEST thread_poller_perf 00:05:06.586 ************************************ 00:05:06.586 14:06:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:06.586 [2024-12-04 14:06:07.959057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.586 [2024-12-04 14:06:07.959272] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ] 00:05:06.848 [2024-12-04 14:06:08.109378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.848 [2024-12-04 14:06:08.284974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.848 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:08.236 [2024-12-04T14:06:09.701Z] ====================================== 00:05:08.236 [2024-12-04T14:06:09.701Z] busy:2616604986 (cyc) 00:05:08.236 [2024-12-04T14:06:09.701Z] total_run_count: 294000 00:05:08.236 [2024-12-04T14:06:09.701Z] tsc_hz: 2600000000 (cyc) 00:05:08.236 [2024-12-04T14:06:09.701Z] ====================================== 00:05:08.236 [2024-12-04T14:06:09.701Z] poller_cost: 8900 (cyc), 3423 (nsec) 00:05:08.236 00:05:08.236 real 0m1.631s 00:05:08.236 user 0m1.438s 00:05:08.236 sys 0m0.082s 00:05:08.236 14:06:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.236 ************************************ 00:05:08.236 END TEST thread_poller_perf 00:05:08.236 ************************************ 00:05:08.236 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.236 14:06:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.236 14:06:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:08.236 14:06:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.236 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:08.236 ************************************ 00:05:08.236 START TEST thread_poller_perf 00:05:08.236 ************************************ 00:05:08.236 14:06:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:08.236 [2024-12-04 14:06:09.640633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
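The first poller_perf pass above reports poller_cost: 8900 (cyc), 3423 (nsec), and both numbers follow from the counters printed with them: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts cycles to wall time. Checking the arithmetic, and applying the same formula to the 0-microsecond-period run whose results follow (helper name illustrative):

  cost() {  # cost <busy_cyc> <run_count> <tsc_hz>
    local cyc=$(( $1 / $2 ))
    echo "$cyc cyc, $(( cyc * 1000000000 / $3 )) nsec"
  }
  cost 2616604986  294000 2600000000   # 8900 cyc, 3423 nsec (1 us period)
  cost 2604552528 3973000 2600000000   # 655 cyc, 251 nsec (busy poll)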
00:05:08.236 [2024-12-04 14:06:09.640845] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58214 ] 00:05:08.497 [2024-12-04 14:06:09.788853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.757 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:08.758 [2024-12-04 14:06:09.961600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.146 [2024-12-04T14:06:11.611Z] ====================================== 00:05:10.146 [2024-12-04T14:06:11.611Z] busy:2604552528 (cyc) 00:05:10.146 [2024-12-04T14:06:11.611Z] total_run_count: 3973000 00:05:10.146 [2024-12-04T14:06:11.611Z] tsc_hz: 2600000000 (cyc) 00:05:10.146 [2024-12-04T14:06:11.611Z] ====================================== 00:05:10.146 [2024-12-04T14:06:11.611Z] poller_cost: 655 (cyc), 251 (nsec) 00:05:10.146 00:05:10.146 real 0m1.612s 00:05:10.146 user 0m1.426s 00:05:10.146 sys 0m0.078s 00:05:10.146 14:06:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.146 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.146 ************************************ 00:05:10.146 END TEST thread_poller_perf 00:05:10.146 ************************************ 00:05:10.146 14:06:11 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:10.146 ************************************ 00:05:10.146 END TEST thread 00:05:10.146 ************************************ 00:05:10.146 00:05:10.146 real 0m3.473s 00:05:10.146 user 0m2.977s 00:05:10.146 sys 0m0.268s 00:05:10.146 14:06:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.146 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.146 14:06:11 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:10.146 14:06:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.146 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.146 ************************************ 00:05:10.146 START TEST accel 00:05:10.146 ************************************ 00:05:10.146 14:06:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:10.146 * Looking for test storage... 
00:05:10.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:10.146 14:06:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.146 14:06:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.146 14:06:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.146 14:06:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.146 14:06:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.146 14:06:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.146 14:06:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.146 14:06:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.146 14:06:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.146 14:06:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.146 14:06:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.146 14:06:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.146 14:06:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.146 14:06:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.146 14:06:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.146 14:06:11 -- scripts/common.sh@344 -- # : 1 00:05:10.146 14:06:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.146 14:06:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.146 14:06:11 -- scripts/common.sh@364 -- # decimal 1 00:05:10.146 14:06:11 -- scripts/common.sh@352 -- # local d=1 00:05:10.146 14:06:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.146 14:06:11 -- scripts/common.sh@354 -- # echo 1 00:05:10.146 14:06:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.146 14:06:11 -- scripts/common.sh@365 -- # decimal 2 00:05:10.146 14:06:11 -- scripts/common.sh@352 -- # local d=2 00:05:10.146 14:06:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.146 14:06:11 -- scripts/common.sh@354 -- # echo 2 00:05:10.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.146 14:06:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.146 14:06:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.146 14:06:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.146 14:06:11 -- scripts/common.sh@367 -- # return 0 00:05:10.146 14:06:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.146 --rc genhtml_branch_coverage=1 00:05:10.146 --rc genhtml_function_coverage=1 00:05:10.146 --rc genhtml_legend=1 00:05:10.146 --rc geninfo_all_blocks=1 00:05:10.146 --rc geninfo_unexecuted_blocks=1 00:05:10.146 00:05:10.146 ' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.146 --rc genhtml_branch_coverage=1 00:05:10.146 --rc genhtml_function_coverage=1 00:05:10.146 --rc genhtml_legend=1 00:05:10.146 --rc geninfo_all_blocks=1 00:05:10.146 --rc geninfo_unexecuted_blocks=1 00:05:10.146 00:05:10.146 ' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.146 --rc genhtml_branch_coverage=1 00:05:10.146 --rc genhtml_function_coverage=1 00:05:10.146 --rc genhtml_legend=1 00:05:10.146 --rc geninfo_all_blocks=1 00:05:10.146 --rc geninfo_unexecuted_blocks=1 00:05:10.146 00:05:10.146 ' 00:05:10.146 14:06:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.146 --rc genhtml_branch_coverage=1 00:05:10.146 --rc genhtml_function_coverage=1 00:05:10.146 --rc genhtml_legend=1 00:05:10.146 --rc geninfo_all_blocks=1 00:05:10.146 --rc geninfo_unexecuted_blocks=1 00:05:10.146 00:05:10.146 ' 00:05:10.146 14:06:11 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:10.146 14:06:11 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:10.146 14:06:11 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:10.146 14:06:11 -- accel/accel.sh@59 -- # spdk_tgt_pid=58296 00:05:10.146 14:06:11 -- accel/accel.sh@60 -- # waitforlisten 58296 00:05:10.146 14:06:11 -- common/autotest_common.sh@829 -- # '[' -z 58296 ']' 00:05:10.146 14:06:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.146 14:06:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.146 14:06:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.146 14:06:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.146 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.146 14:06:11 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:10.146 14:06:11 -- accel/accel.sh@58 -- # build_accel_config 00:05:10.146 14:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.146 14:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.146 14:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.146 14:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.146 14:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.146 14:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.146 14:06:11 -- accel/accel.sh@42 -- # jq -r . 
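The build config assembled just above is handed to spdk_tgt as -c /dev/fd/63: accel.sh collects JSON fragments in accel_json_cfg (empty here, since no module flags were set) and apparently feeds them through process substitution, which the child process sees as an ordinary /dev/fd path. A generic illustration of that mechanism, unrelated to any particular SPDK config schema:

  cat <(echo '{"example": true}')   # <(...) materializes as /dev/fd/NN for the child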
00:05:10.146 [2024-12-04 14:06:11.503916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:10.146 [2024-12-04 14:06:11.504029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58296 ] 00:05:10.454 [2024-12-04 14:06:11.653455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.454 [2024-12-04 14:06:11.825140] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.454 [2024-12-04 14:06:11.825349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.836 14:06:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.836 14:06:12 -- common/autotest_common.sh@862 -- # return 0 00:05:11.836 14:06:12 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:11.836 14:06:12 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:11.836 14:06:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.836 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.836 14:06:12 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:11.836 14:06:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # IFS== 
00:05:11.836 14:06:12 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.836 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.836 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.836 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.836 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.837 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.837 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.837 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.837 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.837 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.837 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.837 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.837 14:06:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # IFS== 00:05:11.837 14:06:13 -- accel/accel.sh@64 -- # read -r opc module 00:05:11.837 14:06:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:11.837 14:06:13 -- accel/accel.sh@67 -- # killprocess 58296 00:05:11.837 14:06:13 -- common/autotest_common.sh@936 -- # '[' -z 58296 ']' 00:05:11.837 14:06:13 -- common/autotest_common.sh@940 -- # kill -0 58296 00:05:11.837 14:06:13 -- common/autotest_common.sh@941 -- # uname 00:05:11.837 14:06:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.837 14:06:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58296 00:05:11.837 killing process with pid 58296 00:05:11.837 14:06:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.837 14:06:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.837 14:06:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58296' 00:05:11.837 14:06:13 -- common/autotest_common.sh@955 -- # kill 58296 00:05:11.837 14:06:13 -- common/autotest_common.sh@960 -- # wait 58296 00:05:13.217 14:06:14 -- accel/accel.sh@68 -- # trap - ERR 00:05:13.217 14:06:14 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:13.217 14:06:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:13.217 14:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.217 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 14:06:14 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:13.217 14:06:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:13.217 14:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.217 14:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.217 14:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.217 14:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:13.217 14:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.217 14:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.217 14:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.217 14:06:14 -- accel/accel.sh@42 -- # jq -r . 00:05:13.217 14:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.217 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 14:06:14 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:13.217 14:06:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:13.217 14:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.217 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:13.217 ************************************ 00:05:13.217 START TEST accel_missing_filename 00:05:13.217 ************************************ 00:05:13.217 14:06:14 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:13.217 14:06:14 -- common/autotest_common.sh@650 -- # local es=0 00:05:13.217 14:06:14 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:13.217 14:06:14 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:13.217 14:06:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.217 14:06:14 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:13.217 14:06:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.217 14:06:14 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:13.217 14:06:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:13.217 14:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.217 14:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.217 14:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.217 14:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.217 14:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.217 14:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.217 14:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.217 14:06:14 -- accel/accel.sh@42 -- # jq -r . 00:05:13.217 [2024-12-04 14:06:14.668359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:13.217 [2024-12-04 14:06:14.668464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58374 ] 00:05:13.476 [2024-12-04 14:06:14.816179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.734 [2024-12-04 14:06:14.951211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.734 [2024-12-04 14:06:15.060608] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.992 [2024-12-04 14:06:15.315071] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:14.250 A filename is required. 
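The accel_missing_filename test above drives accel_perf through the NOT wrapper from common/autotest_common.sh: a compress run without -l <input file> is expected to die (it does, with 'A filename is required.'), and NOT inverts the exit status so the expected failure counts as a pass. The core idiom, sketched without the extra status normalization SPDK layers on top (that part is visible in the es= trace that follows):

    NOT() {                 # succeed exactly when the wrapped command fails
        if "$@"; then
            return 1        # unexpected success
        fi
        return 0            # expected failure
    }
    NOT accel_perf -t 1 -w compress   # passes here: the mandatory -l <file> is missing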
00:05:14.250 14:06:15 -- common/autotest_common.sh@653 -- # es=234 00:05:14.250 14:06:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.250 ************************************ 00:05:14.250 END TEST accel_missing_filename 00:05:14.250 ************************************ 00:05:14.250 14:06:15 -- common/autotest_common.sh@662 -- # es=106 00:05:14.250 14:06:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:14.250 14:06:15 -- common/autotest_common.sh@670 -- # es=1 00:05:14.250 14:06:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.250 00:05:14.250 real 0m0.887s 00:05:14.250 user 0m0.688s 00:05:14.250 sys 0m0.120s 00:05:14.250 14:06:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.250 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.250 14:06:15 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:14.250 14:06:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:14.250 14:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.250 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.250 ************************************ 00:05:14.250 START TEST accel_compress_verify 00:05:14.250 ************************************ 00:05:14.250 14:06:15 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:14.250 14:06:15 -- common/autotest_common.sh@650 -- # local es=0 00:05:14.250 14:06:15 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:14.250 14:06:15 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:14.250 14:06:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.250 14:06:15 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:14.250 14:06:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.250 14:06:15 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:14.250 14:06:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:14.250 14:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.250 14:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.250 14:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.250 14:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.250 14:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.250 14:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.250 14:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.250 14:06:15 -- accel/accel.sh@42 -- # jq -r . 00:05:14.250 [2024-12-04 14:06:15.607500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:14.250 [2024-12-04 14:06:15.607691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:05:14.508 [2024-12-04 14:06:15.756002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.508 [2024-12-04 14:06:15.890944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.765 [2024-12-04 14:06:16.000222] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.024 [2024-12-04 14:06:16.254277] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:15.024 00:05:15.024 Compression does not support the verify option, aborting. 00:05:15.024 14:06:16 -- common/autotest_common.sh@653 -- # es=161 00:05:15.024 14:06:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.024 14:06:16 -- common/autotest_common.sh@662 -- # es=33 00:05:15.024 ************************************ 00:05:15.024 END TEST accel_compress_verify 00:05:15.024 ************************************ 00:05:15.024 14:06:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:15.024 14:06:16 -- common/autotest_common.sh@670 -- # es=1 00:05:15.024 14:06:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.024 00:05:15.024 real 0m0.892s 00:05:15.024 user 0m0.700s 00:05:15.024 sys 0m0.110s 00:05:15.024 14:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.024 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 14:06:16 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:15.285 14:06:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:15.285 14:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.285 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 ************************************ 00:05:15.285 START TEST accel_wrong_workload 00:05:15.285 ************************************ 00:05:15.285 14:06:16 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:15.285 14:06:16 -- common/autotest_common.sh@650 -- # local es=0 00:05:15.285 14:06:16 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:15.285 14:06:16 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.285 14:06:16 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.285 14:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.285 14:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.285 14:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.285 14:06:16 -- accel/accel.sh@42 -- # jq -r . 
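The es=161, es=33, es=1 steps in the trace above are NOT's exit-status normalization at work after the compress-verify run aborts: a status above 128 first has 128 folded off (161 -> 33 here; the earlier test showed 234 -> 106), the case statement then collapses the remaining failure to 1, and ((!es == 0)) finally succeeds precisely because the wrapped command failed. The same sequence written out linearly, as a sketch of what the trace shows rather than the autotest_common.sh source:

    es=161                                 # raw status from the expected-failure run
    (( es > 128 )) && es=$(( es - 128 ))   # 161 -> 33: fold off the signal-range offset
    es=1                                   # the case statement maps any leftover failure to 1
    (( !es == 0 )) && echo "failed as expected, so NOT reports success"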
00:05:15.285 Unsupported workload type: foobar 00:05:15.285 [2024-12-04 14:06:16.548227] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:15.285 accel_perf options: 00:05:15.285 [-h help message] 00:05:15.285 [-q queue depth per core] 00:05:15.285 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:15.285 [-T number of threads per core 00:05:15.285 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:15.285 [-t time in seconds] 00:05:15.285 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:15.285 [ dif_verify, , dif_generate, dif_generate_copy 00:05:15.285 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:15.285 [-l for compress/decompress workloads, name of uncompressed input file 00:05:15.285 [-S for crc32c workload, use this seed value (default 0) 00:05:15.285 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:15.285 [-f for fill workload, use this BYTE value (default 255) 00:05:15.285 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:15.285 [-y verify result if this switch is on] 00:05:15.285 [-a tasks to allocate per core (default: same value as -q)] 00:05:15.285 Can be used to spread operations across a wider range of memory. 00:05:15.285 14:06:16 -- common/autotest_common.sh@653 -- # es=1 00:05:15.285 14:06:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.285 14:06:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.285 14:06:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.285 00:05:15.285 real 0m0.053s 00:05:15.285 user 0m0.057s 00:05:15.285 sys 0m0.023s 00:05:15.285 14:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.285 ************************************ 00:05:15.285 END TEST accel_wrong_workload 00:05:15.285 ************************************ 00:05:15.285 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 14:06:16 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:15.285 14:06:16 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:15.285 14:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.285 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 ************************************ 00:05:15.285 START TEST accel_negative_buffers 00:05:15.285 ************************************ 00:05:15.285 14:06:16 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:15.285 14:06:16 -- common/autotest_common.sh@650 -- # local es=0 00:05:15.285 14:06:16 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:15.285 14:06:16 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:15.285 14:06:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.285 14:06:16 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:15.285 14:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.285 14:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.285 14:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.285 14:06:16 -- accel/accel.sh@42 -- # jq -r . 00:05:15.285 -x option must be non-negative. 00:05:15.285 [2024-12-04 14:06:16.653126] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:15.285 accel_perf options: 00:05:15.285 [-h help message] 00:05:15.285 [-q queue depth per core] 00:05:15.285 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:15.285 [-T number of threads per core 00:05:15.285 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:15.285 [-t time in seconds] 00:05:15.285 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:15.285 [ dif_verify, , dif_generate, dif_generate_copy 00:05:15.285 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:15.285 [-l for compress/decompress workloads, name of uncompressed input file 00:05:15.285 [-S for crc32c workload, use this seed value (default 0) 00:05:15.285 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:15.285 [-f for fill workload, use this BYTE value (default 255) 00:05:15.285 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:15.285 [-y verify result if this switch is on] 00:05:15.285 [-a tasks to allocate per core (default: same value as -q)] 00:05:15.285 Can be used to spread operations across a wider range of memory. 
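Both negative tests above (accel_wrong_workload and accel_negative_buffers) make accel_perf print its usage text on the way out, once for the bogus -w foobar and once for -x -1. For contrast, an invocation that satisfies the listed constraints, composed from the usage text above rather than taken from this run:

    # one second of xor across 2 source buffers (-x minimum is 2), queue depth 32,
    # 4 KiB transfers, with result verification enabled
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -q 32 -o 4096 -y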
00:05:15.285 14:06:16 -- common/autotest_common.sh@653 -- # es=1 00:05:15.285 14:06:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.285 14:06:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:15.285 14:06:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.285 00:05:15.285 real 0m0.056s 00:05:15.285 user 0m0.054s 00:05:15.285 sys 0m0.026s 00:05:15.285 14:06:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.285 ************************************ 00:05:15.285 END TEST accel_negative_buffers 00:05:15.285 ************************************ 00:05:15.285 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 14:06:16 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:15.285 14:06:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:15.285 14:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.285 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:15.285 ************************************ 00:05:15.285 START TEST accel_crc32c 00:05:15.285 ************************************ 00:05:15.285 14:06:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:15.285 14:06:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.285 14:06:16 -- accel/accel.sh@17 -- # local accel_module 00:05:15.285 14:06:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:15.285 14:06:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.285 14:06:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.285 14:06:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.285 14:06:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.285 14:06:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.285 14:06:16 -- accel/accel.sh@42 -- # jq -r . 00:05:15.545 [2024-12-04 14:06:16.755146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.545 [2024-12-04 14:06:16.755242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58472 ] 00:05:15.546 [2024-12-04 14:06:16.903279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.804 [2024-12-04 14:06:17.038757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.182 14:06:18 -- accel/accel.sh@18 -- # out=' 00:05:17.182 SPDK Configuration: 00:05:17.182 Core mask: 0x1 00:05:17.182 00:05:17.182 Accel Perf Configuration: 00:05:17.182 Workload Type: crc32c 00:05:17.182 CRC-32C seed: 32 00:05:17.182 Transfer size: 4096 bytes 00:05:17.182 Vector count 1 00:05:17.182 Module: software 00:05:17.182 Queue depth: 32 00:05:17.182 Allocate depth: 32 00:05:17.182 # threads/core: 1 00:05:17.182 Run time: 1 seconds 00:05:17.182 Verify: Yes 00:05:17.182 00:05:17.182 Running for 1 seconds... 
00:05:17.182 00:05:17.182 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:17.182 ------------------------------------------------------------------------------------ 00:05:17.182 0,0 603328/s 2356 MiB/s 0 0 00:05:17.182 ==================================================================================== 00:05:17.182 Total 603328/s 2356 MiB/s 0 0' 00:05:17.182 14:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:17.182 14:06:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:17.182 14:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:17.182 14:06:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:17.182 14:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.182 14:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.182 14:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.182 14:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.182 14:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.182 14:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.182 14:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.182 14:06:18 -- accel/accel.sh@42 -- # jq -r . 00:05:17.182 [2024-12-04 14:06:18.639766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.182 [2024-12-04 14:06:18.639969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58491 ] 00:05:17.441 [2024-12-04 14:06:18.786924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.699 [2024-12-04 14:06:18.922244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=0x1 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=crc32c 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=32 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=software 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=32 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=32 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=1 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.699 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.699 14:06:19 -- accel/accel.sh@21 -- # val=Yes 00:05:17.699 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.700 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.700 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:17.700 14:06:19 -- accel/accel.sh@21 -- # val= 00:05:17.700 14:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:17.700 14:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- 
accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@21 -- # val= 00:05:19.075 14:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:19.075 14:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:19.075 14:06:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:19.075 14:06:20 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:19.075 ************************************ 00:05:19.075 END TEST accel_crc32c 00:05:19.075 ************************************ 00:05:19.075 14:06:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.075 00:05:19.075 real 0m3.780s 00:05:19.075 user 0m3.356s 00:05:19.075 sys 0m0.220s 00:05:19.075 14:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.075 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.334 14:06:20 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:19.334 14:06:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:19.334 14:06:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.334 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:19.334 ************************************ 00:05:19.334 START TEST accel_crc32c_C2 00:05:19.334 ************************************ 00:05:19.334 14:06:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:19.334 14:06:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.334 14:06:20 -- accel/accel.sh@17 -- # local accel_module 00:05:19.334 14:06:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:19.334 14:06:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:19.334 14:06:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.334 14:06:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.334 14:06:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.334 14:06:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.334 14:06:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.334 14:06:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.334 14:06:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.334 14:06:20 -- accel/accel.sh@42 -- # jq -r . 00:05:19.334 [2024-12-04 14:06:20.592387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:19.335 [2024-12-04 14:06:20.592490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58528 ] 00:05:19.335 [2024-12-04 14:06:20.737320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.593 [2024-12-04 14:06:20.873818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.497 14:06:22 -- accel/accel.sh@18 -- # out=' 00:05:21.497 SPDK Configuration: 00:05:21.497 Core mask: 0x1 00:05:21.497 00:05:21.497 Accel Perf Configuration: 00:05:21.497 Workload Type: crc32c 00:05:21.497 CRC-32C seed: 0 00:05:21.497 Transfer size: 4096 bytes 00:05:21.497 Vector count 2 00:05:21.497 Module: software 00:05:21.497 Queue depth: 32 00:05:21.497 Allocate depth: 32 00:05:21.497 # threads/core: 1 00:05:21.497 Run time: 1 seconds 00:05:21.497 Verify: Yes 00:05:21.497 00:05:21.497 Running for 1 seconds... 
00:05:21.497 00:05:21.497 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.497 ------------------------------------------------------------------------------------ 00:05:21.497 0,0 508032/s 1984 MiB/s 0 0 00:05:21.497 ==================================================================================== 00:05:21.497 Total 508032/s 1984 MiB/s 0 0' 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:21.497 14:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.497 14:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.497 14:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.497 14:06:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:21.497 14:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.497 14:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.497 14:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.497 14:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.497 14:06:22 -- accel/accel.sh@42 -- # jq -r . 00:05:21.497 [2024-12-04 14:06:22.481740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:21.497 [2024-12-04 14:06:22.481980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58554 ] 00:05:21.497 [2024-12-04 14:06:22.628318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.497 [2024-12-04 14:06:22.765292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=0x1 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=crc32c 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=0 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 --
accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=software 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=32 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=32 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.497 14:06:22 -- accel/accel.sh@21 -- # val=1 00:05:21.497 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.497 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.498 14:06:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.498 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.498 14:06:22 -- accel/accel.sh@21 -- # val=Yes 00:05:21.498 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.498 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.498 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:21.498 14:06:22 -- accel/accel.sh@21 -- # val= 00:05:21.498 14:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # IFS=: 00:05:21.498 14:06:22 -- accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- 
accel/accel.sh@20 -- # read -r var val 00:05:22.874 14:06:24 -- accel/accel.sh@21 -- # val= 00:05:22.874 14:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # IFS=: 00:05:22.874 14:06:24 -- accel/accel.sh@20 -- # read -r var val 00:05:23.134 14:06:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:23.134 14:06:24 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:23.134 14:06:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.134 00:05:23.134 real 0m3.791s 00:05:23.134 user 0m3.354s 00:05:23.134 sys 0m0.233s 00:05:23.134 14:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.134 ************************************ 00:05:23.134 END TEST accel_crc32c_C2 00:05:23.134 ************************************ 00:05:23.134 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.134 14:06:24 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:23.134 14:06:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:23.134 14:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.134 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:05:23.134 ************************************ 00:05:23.134 START TEST accel_copy 00:05:23.134 ************************************ 00:05:23.134 14:06:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:23.134 14:06:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.134 14:06:24 -- accel/accel.sh@17 -- # local accel_module 00:05:23.134 14:06:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:23.134 14:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:23.134 14:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.134 14:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.134 14:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.134 14:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.134 14:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.134 14:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.134 14:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.134 14:06:24 -- accel/accel.sh@42 -- # jq -r . 00:05:23.134 [2024-12-04 14:06:24.433992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.134 [2024-12-04 14:06:24.434083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:05:23.134 [2024-12-04 14:06:24.579237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.392 [2024-12-04 14:06:24.715064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.290 14:06:26 -- accel/accel.sh@18 -- # out=' 00:05:25.290 SPDK Configuration: 00:05:25.290 Core mask: 0x1 00:05:25.290 00:05:25.290 Accel Perf Configuration: 00:05:25.290 Workload Type: copy 00:05:25.290 Transfer size: 4096 bytes 00:05:25.290 Vector count 1 00:05:25.290 Module: software 00:05:25.290 Queue depth: 32 00:05:25.290 Allocate depth: 32 00:05:25.290 # threads/core: 1 00:05:25.290 Run time: 1 seconds 00:05:25.290 Verify: Yes 00:05:25.290 00:05:25.290 Running for 1 seconds... 
00:05:25.290 00:05:25.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:25.290 ------------------------------------------------------------------------------------ 00:05:25.290 0,0 372448/s 1454 MiB/s 0 0 00:05:25.290 ==================================================================================== 00:05:25.290 Total 372448/s 1454 MiB/s 0 0' 00:05:25.290 14:06:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:25.290 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.290 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:25.291 14:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.291 14:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.291 14:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.291 14:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.291 14:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.291 14:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.291 14:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.291 14:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:25.291 [2024-12-04 14:06:26.332455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.291 [2024-12-04 14:06:26.332555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58621 ] 00:05:25.291 [2024-12-04 14:06:26.479824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.291 [2024-12-04 14:06:26.614399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=0x1 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=copy 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- 
accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=software 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@23 -- # accel_module=software 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=32 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=32 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=1 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val=Yes 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:25.291 14:06:26 -- accel/accel.sh@21 -- # val= 00:05:25.291 14:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # IFS=: 00:05:25.291 14:06:26 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@21 -- # val= 00:05:27.192 14:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.192 14:06:28 -- accel/accel.sh@20 -- # IFS=: 00:05:27.192 14:06:28 -- 
accel/accel.sh@20 -- # read -r var val 00:05:27.192 14:06:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:27.192 14:06:28 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:27.192 14:06:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.192 00:05:27.192 real 0m3.788s 00:05:27.192 user 0m3.356s 00:05:27.192 sys 0m0.230s 00:05:27.192 ************************************ 00:05:27.192 END TEST accel_copy 00:05:27.192 ************************************ 00:05:27.192 14:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.192 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.192 14:06:28 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.192 14:06:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:27.192 14:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.192 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:27.192 ************************************ 00:05:27.192 START TEST accel_fill 00:05:27.192 ************************************ 00:05:27.192 14:06:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.192 14:06:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.192 14:06:28 -- accel/accel.sh@17 -- # local accel_module 00:05:27.192 14:06:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.192 14:06:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:27.192 14:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.192 14:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.192 14:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.192 14:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.192 14:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.192 14:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.192 14:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.192 14:06:28 -- accel/accel.sh@42 -- # jq -r . 00:05:27.192 [2024-12-04 14:06:28.275704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.192 [2024-12-04 14:06:28.275804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58662 ] 00:05:27.192 [2024-12-04 14:06:28.423718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.192 [2024-12-04 14:06:28.558253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.095 14:06:30 -- accel/accel.sh@18 -- # out=' 00:05:29.095 SPDK Configuration: 00:05:29.095 Core mask: 0x1 00:05:29.095 00:05:29.095 Accel Perf Configuration: 00:05:29.095 Workload Type: fill 00:05:29.095 Fill pattern: 0x80 00:05:29.095 Transfer size: 4096 bytes 00:05:29.095 Vector count 1 00:05:29.095 Module: software 00:05:29.095 Queue depth: 64 00:05:29.095 Allocate depth: 64 00:05:29.095 # threads/core: 1 00:05:29.095 Run time: 1 seconds 00:05:29.095 Verify: Yes 00:05:29.095 00:05:29.095 Running for 1 seconds... 
00:05:29.095 00:05:29.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.095 ------------------------------------------------------------------------------------ 00:05:29.095 0,0 596736/s 2331 MiB/s 0 0 00:05:29.095 ==================================================================================== 00:05:29.095 Total 596736/s 2331 MiB/s 0 0' 00:05:29.095 14:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.095 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.095 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.095 14:06:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.095 14:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.095 14:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.095 14:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.095 14:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.095 14:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.095 14:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.095 14:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.095 14:06:30 -- accel/accel.sh@42 -- # jq -r . 00:05:29.096 [2024-12-04 14:06:30.167275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.096 [2024-12-04 14:06:30.167664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58688 ] 00:05:29.096 [2024-12-04 14:06:30.315488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.096 [2024-12-04 14:06:30.458889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=0x1 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=fill 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=0x80 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 
00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=software 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=64 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=64 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=1 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.355 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.355 14:06:30 -- accel/accel.sh@21 -- # val=Yes 00:05:29.355 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.356 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.356 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:29.356 14:06:30 -- accel/accel.sh@21 -- # val= 00:05:29.356 14:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # IFS=: 00:05:29.356 14:06:30 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 
00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@21 -- # val= 00:05:30.787 14:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # IFS=: 00:05:30.787 14:06:32 -- accel/accel.sh@20 -- # read -r var val 00:05:30.787 14:06:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:30.787 14:06:32 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:30.787 14:06:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.787 00:05:30.787 real 0m3.797s 00:05:30.787 user 0m1.685s 00:05:30.787 sys 0m0.119s 00:05:30.787 14:06:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:30.787 ************************************ 00:05:30.787 END TEST accel_fill 00:05:30.787 ************************************ 00:05:30.787 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:30.787 14:06:32 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:30.787 14:06:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:30.787 14:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.787 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:30.787 ************************************ 00:05:30.787 START TEST accel_copy_crc32c 00:05:30.787 ************************************ 00:05:30.787 14:06:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:30.787 14:06:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.787 14:06:32 -- accel/accel.sh@17 -- # local accel_module 00:05:30.787 14:06:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:30.787 14:06:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:30.787 14:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.787 14:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.787 14:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.787 14:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.787 14:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.787 14:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.787 14:06:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.787 14:06:32 -- accel/accel.sh@42 -- # jq -r . 00:05:30.787 [2024-12-04 14:06:32.127650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:30.787 [2024-12-04 14:06:32.127764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58729 ] 00:05:31.063 [2024-12-04 14:06:32.275418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.063 [2024-12-04 14:06:32.410270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.962 14:06:33 -- accel/accel.sh@18 -- # out=' 00:05:32.962 SPDK Configuration: 00:05:32.962 Core mask: 0x1 00:05:32.962 00:05:32.962 Accel Perf Configuration: 00:05:32.962 Workload Type: copy_crc32c 00:05:32.962 CRC-32C seed: 0 00:05:32.962 Vector size: 4096 bytes 00:05:32.962 Transfer size: 4096 bytes 00:05:32.962 Vector count 1 00:05:32.962 Module: software 00:05:32.962 Queue depth: 32 00:05:32.962 Allocate depth: 32 00:05:32.962 # threads/core: 1 00:05:32.962 Run time: 1 seconds 00:05:32.962 Verify: Yes 00:05:32.962 00:05:32.962 Running for 1 seconds... 
00:05:32.962 00:05:32.962 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.962 ------------------------------------------------------------------------------------ 00:05:32.962 0,0 312352/s 1220 MiB/s 0 0 00:05:32.962 ==================================================================================== 00:05:32.962 Total 312352/s 1220 MiB/s 0 0' 00:05:32.962 14:06:33 -- accel/accel.sh@20 -- # IFS=: 00:05:32.962 14:06:33 -- accel/accel.sh@20 -- # read -r var val 00:05:32.962 14:06:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:32.962 14:06:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:32.962 14:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.962 14:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.962 14:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.963 14:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.963 14:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.963 14:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.963 14:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.963 14:06:33 -- accel/accel.sh@42 -- # jq -r . 00:05:32.963 [2024-12-04 14:06:34.019306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.963 [2024-12-04 14:06:34.019411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58750 ] 00:05:32.963 [2024-12-04 14:06:34.167910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.963 [2024-12-04 14:06:34.381149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=0x1 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=0 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 
14:06:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=software 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=32 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=32 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=1 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.223 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.223 14:06:34 -- accel/accel.sh@21 -- # val=Yes 00:05:33.223 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.224 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.224 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:33.224 14:06:34 -- accel/accel.sh@21 -- # val= 00:05:33.224 14:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # IFS=: 00:05:33.224 14:06:34 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:35 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:34.601 14:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:35 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:35 -- accel/accel.sh@20 -- # IFS=: 00:05:34.601 14:06:35 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:35 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:36 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # IFS=: 
00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:36 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:36 -- accel/accel.sh@21 -- # val= 00:05:34.601 14:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # IFS=: 00:05:34.601 14:06:36 -- accel/accel.sh@20 -- # read -r var val 00:05:34.601 14:06:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.601 14:06:36 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:34.601 14:06:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.601 00:05:34.601 real 0m3.921s 00:05:34.601 user 0m3.456s 00:05:34.601 sys 0m0.254s 00:05:34.601 14:06:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.601 ************************************ 00:05:34.601 END TEST accel_copy_crc32c 00:05:34.601 ************************************ 00:05:34.601 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.601 14:06:36 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:34.601 14:06:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:34.601 14:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.601 14:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:34.601 ************************************ 00:05:34.601 START TEST accel_copy_crc32c_C2 00:05:34.601 ************************************ 00:05:34.601 14:06:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:34.601 14:06:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.601 14:06:36 -- accel/accel.sh@17 -- # local accel_module 00:05:34.601 14:06:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:34.601 14:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:34.601 14:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.601 14:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.601 14:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.601 14:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.601 14:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.601 14:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.601 14:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.601 14:06:36 -- accel/accel.sh@42 -- # jq -r . 00:05:34.861 [2024-12-04 14:06:36.087672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:34.861 [2024-12-04 14:06:36.087774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58791 ] 00:05:34.861 [2024-12-04 14:06:36.233241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.121 [2024-12-04 14:06:36.370025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.500 14:06:37 -- accel/accel.sh@18 -- # out=' 00:05:36.500 SPDK Configuration: 00:05:36.500 Core mask: 0x1 00:05:36.500 00:05:36.500 Accel Perf Configuration: 00:05:36.500 Workload Type: copy_crc32c 00:05:36.500 CRC-32C seed: 0 00:05:36.500 Vector size: 4096 bytes 00:05:36.500 Transfer size: 8192 bytes 00:05:36.500 Vector count 2 00:05:36.500 Module: software 00:05:36.500 Queue depth: 32 00:05:36.500 Allocate depth: 32 00:05:36.500 # threads/core: 1 00:05:36.500 Run time: 1 seconds 00:05:36.500 Verify: Yes 00:05:36.500 00:05:36.500 Running for 1 seconds... 00:05:36.500 00:05:36.500 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:36.500 ------------------------------------------------------------------------------------ 00:05:36.500 0,0 233504/s 1824 MiB/s 0 0 00:05:36.500 ==================================================================================== 00:05:36.500 Total 233504/s 1824 MiB/s 0 0' 00:05:36.500 14:06:37 -- accel/accel.sh@20 -- # IFS=: 00:05:36.500 14:06:37 -- accel/accel.sh@20 -- # read -r var val 00:05:36.500 14:06:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:36.500 14:06:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:36.500 14:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.500 14:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.500 14:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.500 14:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.500 14:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.500 14:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.500 14:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.500 14:06:37 -- accel/accel.sh@42 -- # jq -r . 00:05:36.761 [2024-12-04 14:06:37.979912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:36.761 [2024-12-04 14:06:37.980017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:05:36.761 [2024-12-04 14:06:38.124384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.020 [2024-12-04 14:06:38.260016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=0x1 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=0 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=software 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=32 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=32 
00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=1 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val=Yes 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:37.020 14:06:38 -- accel/accel.sh@21 -- # val= 00:05:37.020 14:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # IFS=: 00:05:37.020 14:06:38 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@21 -- # val= 00:05:38.396 14:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # IFS=: 00:05:38.396 14:06:39 -- accel/accel.sh@20 -- # read -r var val 00:05:38.396 14:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:38.396 14:06:39 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:38.396 14:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.396 00:05:38.396 real 0m3.783s 00:05:38.396 user 0m3.354s 00:05:38.396 sys 0m0.219s 00:05:38.396 14:06:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.396 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.396 ************************************ 00:05:38.396 END TEST accel_copy_crc32c_C2 00:05:38.396 ************************************ 00:05:38.656 14:06:39 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:38.656 14:06:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:05:38.656 14:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.656 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:05:38.656 ************************************ 00:05:38.656 START TEST accel_dualcast 00:05:38.656 ************************************ 00:05:38.656 14:06:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:38.656 14:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.656 14:06:39 -- accel/accel.sh@17 -- # local accel_module 00:05:38.656 14:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:38.656 14:06:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:38.656 14:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.656 14:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.656 14:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.656 14:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.656 14:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.656 14:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.656 14:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.656 14:06:39 -- accel/accel.sh@42 -- # jq -r . 00:05:38.656 [2024-12-04 14:06:39.921734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.656 [2024-12-04 14:06:39.922134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58852 ] 00:05:38.656 [2024-12-04 14:06:40.063282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.915 [2024-12-04 14:06:40.201142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.819 14:06:41 -- accel/accel.sh@18 -- # out=' 00:05:40.819 SPDK Configuration: 00:05:40.819 Core mask: 0x1 00:05:40.819 00:05:40.819 Accel Perf Configuration: 00:05:40.819 Workload Type: dualcast 00:05:40.819 Transfer size: 4096 bytes 00:05:40.819 Vector count 1 00:05:40.819 Module: software 00:05:40.819 Queue depth: 32 00:05:40.819 Allocate depth: 32 00:05:40.819 # threads/core: 1 00:05:40.819 Run time: 1 seconds 00:05:40.819 Verify: Yes 00:05:40.819 00:05:40.819 Running for 1 seconds... 00:05:40.819 00:05:40.819 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.819 ------------------------------------------------------------------------------------ 00:05:40.819 0,0 440448/s 1720 MiB/s 0 0 00:05:40.819 ==================================================================================== 00:05:40.819 Total 440448/s 1720 MiB/s 0 0' 00:05:40.819 14:06:41 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:41 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:40.819 14:06:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:40.819 14:06:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.819 14:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.819 14:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.819 14:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.819 14:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.819 14:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.819 14:06:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.819 14:06:41 -- accel/accel.sh@42 -- # jq -r . 
00:05:40.819 [2024-12-04 14:06:41.807128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.819 [2024-12-04 14:06:41.807231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:05:40.819 [2024-12-04 14:06:41.955142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.819 [2024-12-04 14:06:42.090918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val=0x1 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val=dualcast 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.819 14:06:42 -- accel/accel.sh@21 -- # val=software 00:05:40.819 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.819 14:06:42 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.819 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val=32 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val=32 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val=1 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 
14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val=Yes 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:40.820 14:06:42 -- accel/accel.sh@21 -- # val= 00:05:40.820 14:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # IFS=: 00:05:40.820 14:06:42 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.200 14:06:43 -- accel/accel.sh@21 -- # val= 00:05:42.200 14:06:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # IFS=: 00:05:42.200 14:06:43 -- accel/accel.sh@20 -- # read -r var val 00:05:42.460 14:06:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.460 14:06:43 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:42.460 14:06:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.460 00:05:42.460 real 0m3.783s 00:05:42.460 user 0m3.359s 00:05:42.460 sys 0m0.222s 00:05:42.460 14:06:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.460 ************************************ 00:05:42.460 END TEST accel_dualcast 00:05:42.460 ************************************ 00:05:42.460 14:06:43 -- common/autotest_common.sh@10 -- # set +x 00:05:42.460 14:06:43 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:42.460 14:06:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:42.460 14:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.460 14:06:43 -- common/autotest_common.sh@10 -- # set +x 00:05:42.460 ************************************ 00:05:42.460 START TEST accel_compare 00:05:42.460 ************************************ 00:05:42.460 14:06:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:42.460 
14:06:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.460 14:06:43 -- accel/accel.sh@17 -- # local accel_module 00:05:42.460 14:06:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:42.460 14:06:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:42.460 14:06:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.460 14:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.460 14:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.460 14:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.460 14:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.460 14:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.460 14:06:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.460 14:06:43 -- accel/accel.sh@42 -- # jq -r . 00:05:42.460 [2024-12-04 14:06:43.768049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.460 [2024-12-04 14:06:43.768164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:05:42.460 [2024-12-04 14:06:43.910732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.720 [2024-12-04 14:06:44.046288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.624 14:06:45 -- accel/accel.sh@18 -- # out=' 00:05:44.624 SPDK Configuration: 00:05:44.624 Core mask: 0x1 00:05:44.624 00:05:44.624 Accel Perf Configuration: 00:05:44.624 Workload Type: compare 00:05:44.624 Transfer size: 4096 bytes 00:05:44.624 Vector count 1 00:05:44.624 Module: software 00:05:44.624 Queue depth: 32 00:05:44.624 Allocate depth: 32 00:05:44.624 # threads/core: 1 00:05:44.624 Run time: 1 seconds 00:05:44.624 Verify: Yes 00:05:44.624 00:05:44.624 Running for 1 seconds... 00:05:44.624 00:05:44.624 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:44.624 ------------------------------------------------------------------------------------ 00:05:44.624 0,0 562400/s 2196 MiB/s 0 0 00:05:44.624 ==================================================================================== 00:05:44.624 Total 562400/s 2196 MiB/s 0 0' 00:05:44.624 14:06:45 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:45 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:44.624 14:06:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:44.624 14:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.624 14:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.624 14:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.624 14:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.624 14:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.624 14:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.624 14:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.624 14:06:45 -- accel/accel.sh@42 -- # jq -r . 00:05:44.624 [2024-12-04 14:06:45.663531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.624 [2024-12-04 14:06:45.663640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:05:44.624 [2024-12-04 14:06:45.808833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.624 [2024-12-04 14:06:45.943325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=0x1 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=compare 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=software 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=32 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=32 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=1 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val=Yes 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:44.624 14:06:46 -- accel/accel.sh@21 -- # val= 00:05:44.624 14:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.624 14:06:46 -- accel/accel.sh@20 -- # IFS=: 00:05:44.625 14:06:46 -- accel/accel.sh@20 -- # read -r var val 00:05:46.529 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@21 -- # val= 00:05:46.530 14:06:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # IFS=: 00:05:46.530 14:06:47 -- accel/accel.sh@20 -- # read -r var val 00:05:46.530 14:06:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.530 14:06:47 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:46.530 14:06:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.530 ************************************ 00:05:46.530 END TEST accel_compare 00:05:46.530 ************************************ 00:05:46.530 00:05:46.530 real 0m3.786s 00:05:46.530 user 0m3.346s 00:05:46.530 sys 0m0.224s 00:05:46.530 14:06:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.530 14:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.530 14:06:47 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:46.530 14:06:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:46.530 14:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.530 14:06:47 -- common/autotest_common.sh@10 -- # set +x 00:05:46.530 ************************************ 00:05:46.530 START TEST accel_xor 00:05:46.530 ************************************ 00:05:46.530 14:06:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:46.530 14:06:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.530 14:06:47 -- accel/accel.sh@17 -- # local accel_module 00:05:46.530 
14:06:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:46.530 14:06:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:46.530 14:06:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.530 14:06:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.530 14:06:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.530 14:06:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.530 14:06:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.530 14:06:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.530 14:06:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.530 14:06:47 -- accel/accel.sh@42 -- # jq -r . 00:05:46.530 [2024-12-04 14:06:47.612653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.530 [2024-12-04 14:06:47.612755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58981 ] 00:05:46.530 [2024-12-04 14:06:47.759269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.530 [2024-12-04 14:06:47.894832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.434 14:06:49 -- accel/accel.sh@18 -- # out=' 00:05:48.434 SPDK Configuration: 00:05:48.434 Core mask: 0x1 00:05:48.434 00:05:48.434 Accel Perf Configuration: 00:05:48.434 Workload Type: xor 00:05:48.434 Source buffers: 2 00:05:48.434 Transfer size: 4096 bytes 00:05:48.434 Vector count 1 00:05:48.434 Module: software 00:05:48.434 Queue depth: 32 00:05:48.434 Allocate depth: 32 00:05:48.434 # threads/core: 1 00:05:48.434 Run time: 1 seconds 00:05:48.434 Verify: Yes 00:05:48.434 00:05:48.434 Running for 1 seconds... 00:05:48.434 00:05:48.434 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.434 ------------------------------------------------------------------------------------ 00:05:48.434 0,0 446528/s 1744 MiB/s 0 0 00:05:48.434 ==================================================================================== 00:05:48.434 Total 446528/s 1744 MiB/s 0 0' 00:05:48.434 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.434 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.434 14:06:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:48.434 14:06:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:48.434 14:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.434 14:06:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.434 14:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.434 14:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.434 14:06:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.434 14:06:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.434 14:06:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.434 14:06:49 -- accel/accel.sh@42 -- # jq -r . 00:05:48.434 [2024-12-04 14:06:49.495610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:48.434 [2024-12-04 14:06:49.495718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:05:48.434 [2024-12-04 14:06:49.645499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.434 [2024-12-04 14:06:49.820988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=0x1 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=xor 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=2 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=software 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=32 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=32 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=1 00:05:48.696 14:06:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val=Yes 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:48.696 14:06:49 -- accel/accel.sh@21 -- # val= 00:05:48.696 14:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # IFS=: 00:05:48.696 14:06:49 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@21 -- # val= 00:05:50.084 14:06:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # IFS=: 00:05:50.084 14:06:51 -- accel/accel.sh@20 -- # read -r var val 00:05:50.084 14:06:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.084 14:06:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:50.084 14:06:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.084 00:05:50.084 real 0m3.914s 00:05:50.084 user 0m3.471s 00:05:50.084 sys 0m0.232s 00:05:50.084 14:06:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.084 ************************************ 00:05:50.084 END TEST accel_xor 00:05:50.084 14:06:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.084 ************************************ 00:05:50.084 14:06:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:50.084 14:06:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:50.084 14:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.084 14:06:51 -- common/autotest_common.sh@10 -- # set +x 00:05:50.084 ************************************ 00:05:50.084 START TEST accel_xor 00:05:50.084 ************************************ 00:05:50.084 
14:06:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:50.084 14:06:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.084 14:06:51 -- accel/accel.sh@17 -- # local accel_module 00:05:50.084 14:06:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:50.084 14:06:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:50.084 14:06:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.084 14:06:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.084 14:06:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.084 14:06:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.084 14:06:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.084 14:06:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.084 14:06:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.084 14:06:51 -- accel/accel.sh@42 -- # jq -r . 00:05:50.343 [2024-12-04 14:06:51.574050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.343 [2024-12-04 14:06:51.574173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59050 ] 00:05:50.343 [2024-12-04 14:06:51.719497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.601 [2024-12-04 14:06:51.856662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.979 14:06:53 -- accel/accel.sh@18 -- # out=' 00:05:51.979 SPDK Configuration: 00:05:51.979 Core mask: 0x1 00:05:51.979 00:05:51.979 Accel Perf Configuration: 00:05:51.979 Workload Type: xor 00:05:51.979 Source buffers: 3 00:05:51.979 Transfer size: 4096 bytes 00:05:51.979 Vector count 1 00:05:51.979 Module: software 00:05:51.979 Queue depth: 32 00:05:51.979 Allocate depth: 32 00:05:51.979 # threads/core: 1 00:05:51.979 Run time: 1 seconds 00:05:51.979 Verify: Yes 00:05:51.979 00:05:51.979 Running for 1 seconds... 00:05:51.979 00:05:51.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.979 ------------------------------------------------------------------------------------ 00:05:51.979 0,0 425600/s 1662 MiB/s 0 0 00:05:51.979 ==================================================================================== 00:05:51.980 Total 425600/s 1662 MiB/s 0 0' 00:05:51.980 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:51.980 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:51.980 14:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:51.980 14:06:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:51.980 14:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.980 14:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.980 14:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.980 14:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.980 14:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.980 14:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.980 14:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.980 14:06:53 -- accel/accel.sh@42 -- # jq -r . 00:05:52.245 [2024-12-04 14:06:53.467648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:52.245 [2024-12-04 14:06:53.467754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59076 ] 00:05:52.245 [2024-12-04 14:06:53.615999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.502 [2024-12-04 14:06:53.752330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=0x1 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=xor 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=3 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=software 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=32 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=32 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=1 00:05:52.502 14:06:53 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val=Yes 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:52.502 14:06:53 -- accel/accel.sh@21 -- # val= 00:05:52.502 14:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # IFS=: 00:05:52.502 14:06:53 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@21 -- # val= 00:05:53.905 14:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # IFS=: 00:05:53.905 14:06:55 -- accel/accel.sh@20 -- # read -r var val 00:05:53.905 14:06:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.905 14:06:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:53.905 14:06:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.905 00:05:53.905 real 0m3.796s 00:05:53.905 user 0m3.365s 00:05:53.905 sys 0m0.222s 00:05:53.905 14:06:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.905 14:06:55 -- common/autotest_common.sh@10 -- # set +x 00:05:53.905 ************************************ 00:05:53.905 END TEST accel_xor 00:05:53.905 ************************************ 00:05:54.166 14:06:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:54.166 14:06:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:54.166 14:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.166 14:06:55 -- common/autotest_common.sh@10 -- # set +x 00:05:54.166 ************************************ 00:05:54.166 START TEST accel_dif_verify 00:05:54.166 ************************************ 
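The dif_verify test starting here (and the dif_generate variants after it) runs 4096-byte transfers split into 512-byte blocks, each carrying 8 bytes of protection metadata — the "Block size: 512 bytes" and "Metadata size: 8 bytes" lines in the configuration below. Assuming the standard T10 DIF tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag; byte order and tag policies elided), a sketch of what generate and verify amount to per block:

    #include <stddef.h>
    #include <stdint.h>

    struct t10_dif {        /* the 8 bytes of per-block metadata */
        uint16_t guard;     /* CRC-16 over the 512 data bytes */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* Bitwise CRC-16/T10-DIF: polynomial 0x8BB7, init 0, MSB-first. */
    static uint16_t
    crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

    /* dif_generate fills in the tuple; dif_verify recomputes the guard and
     * checks all three fields. A schematic, not SPDK's DIF code. */
    static int
    dif_verify_block(const uint8_t data[512], const struct t10_dif *dif,
                     uint16_t exp_app_tag, uint32_t exp_ref_tag)
    {
        return dif->guard == crc16_t10dif(data, 512) &&
               dif->app_tag == exp_app_tag &&
               dif->ref_tag == exp_ref_tag;
    }

Note the configurations below report "Verify: No": the -y flag is not passed for the dif runs, since the operation itself is the integrity check.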
00:05:54.166 14:06:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:05:54.166 14:06:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.166 14:06:55 -- accel/accel.sh@17 -- # local accel_module 00:05:54.166 14:06:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:54.166 14:06:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:54.166 14:06:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.166 14:06:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.166 14:06:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.166 14:06:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.166 14:06:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.166 14:06:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.166 14:06:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.166 14:06:55 -- accel/accel.sh@42 -- # jq -r . 00:05:54.166 [2024-12-04 14:06:55.417573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.166 [2024-12-04 14:06:55.417654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ] 00:05:54.166 [2024-12-04 14:06:55.561371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.427 [2024-12-04 14:06:55.754031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.343 14:06:57 -- accel/accel.sh@18 -- # out=' 00:05:56.343 SPDK Configuration: 00:05:56.343 Core mask: 0x1 00:05:56.343 00:05:56.343 Accel Perf Configuration: 00:05:56.343 Workload Type: dif_verify 00:05:56.343 Vector size: 4096 bytes 00:05:56.343 Transfer size: 4096 bytes 00:05:56.343 Block size: 512 bytes 00:05:56.343 Metadata size: 8 bytes 00:05:56.343 Vector count 1 00:05:56.343 Module: software 00:05:56.343 Queue depth: 32 00:05:56.343 Allocate depth: 32 00:05:56.343 # threads/core: 1 00:05:56.343 Run time: 1 seconds 00:05:56.343 Verify: No 00:05:56.343 00:05:56.343 Running for 1 seconds... 00:05:56.343 00:05:56.343 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.343 ------------------------------------------------------------------------------------ 00:05:56.343 0,0 98016/s 388 MiB/s 0 0 00:05:56.343 ==================================================================================== 00:05:56.343 Total 98016/s 382 MiB/s 0 0' 00:05:56.343 14:06:57 -- accel/accel.sh@20 -- # IFS=: 00:05:56.343 14:06:57 -- accel/accel.sh@20 -- # read -r var val 00:05:56.343 14:06:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:56.343 14:06:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:56.343 14:06:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.343 14:06:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.343 14:06:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.343 14:06:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.343 14:06:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.343 14:06:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.343 14:06:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.343 14:06:57 -- accel/accel.sh@42 -- # jq -r . 00:05:56.343 [2024-12-04 14:06:57.608967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:56.343 [2024-12-04 14:06:57.609077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:05:56.343 [2024-12-04 14:06:57.748785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.602 [2024-12-04 14:06:57.896510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=0x1 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=dif_verify 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=software 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 
-- # val=32 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=32 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=1 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val=No 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:56.602 14:06:58 -- accel/accel.sh@21 -- # val= 00:05:56.602 14:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # IFS=: 00:05:56.602 14:06:58 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@21 -- # val= 00:05:58.506 14:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # IFS=: 00:05:58.506 14:06:59 -- accel/accel.sh@20 -- # read -r var val 00:05:58.506 14:06:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.506 14:06:59 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:58.506 14:06:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.506 00:05:58.506 real 0m4.092s 00:05:58.506 user 0m3.634s 00:05:58.506 sys 0m0.249s 00:05:58.506 14:06:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.506 14:06:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.506 ************************************ 00:05:58.506 END TEST 
accel_dif_verify 00:05:58.506 ************************************ 00:05:58.506 14:06:59 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:58.506 14:06:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:58.506 14:06:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.506 14:06:59 -- common/autotest_common.sh@10 -- # set +x 00:05:58.506 ************************************ 00:05:58.506 START TEST accel_dif_generate 00:05:58.506 ************************************ 00:05:58.506 14:06:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:05:58.506 14:06:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.506 14:06:59 -- accel/accel.sh@17 -- # local accel_module 00:05:58.506 14:06:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:58.506 14:06:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:58.506 14:06:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.506 14:06:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.506 14:06:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.506 14:06:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.506 14:06:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.506 14:06:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.506 14:06:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.506 14:06:59 -- accel/accel.sh@42 -- # jq -r . 00:05:58.506 [2024-12-04 14:06:59.578666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.506 [2024-12-04 14:06:59.578774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59186 ] 00:05:58.506 [2024-12-04 14:06:59.727124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.506 [2024-12-04 14:06:59.873544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.406 14:07:01 -- accel/accel.sh@18 -- # out=' 00:06:00.406 SPDK Configuration: 00:06:00.406 Core mask: 0x1 00:06:00.406 00:06:00.406 Accel Perf Configuration: 00:06:00.406 Workload Type: dif_generate 00:06:00.406 Vector size: 4096 bytes 00:06:00.406 Transfer size: 4096 bytes 00:06:00.406 Block size: 512 bytes 00:06:00.406 Metadata size: 8 bytes 00:06:00.406 Vector count 1 00:06:00.406 Module: software 00:06:00.406 Queue depth: 32 00:06:00.406 Allocate depth: 32 00:06:00.406 # threads/core: 1 00:06:00.406 Run time: 1 seconds 00:06:00.406 Verify: No 00:06:00.406 00:06:00.406 Running for 1 seconds... 
00:06:00.406 00:06:00.406 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.406 ------------------------------------------------------------------------------------ 00:06:00.406 0,0 153472/s 608 MiB/s 0 0 00:06:00.406 ==================================================================================== 00:06:00.406 Total 153472/s 599 MiB/s 0 0' 00:06:00.406 14:07:01 -- accel/accel.sh@20 -- # IFS=: 00:06:00.406 14:07:01 -- accel/accel.sh@20 -- # read -r var val 00:06:00.406 14:07:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:00.406 14:07:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:00.406 14:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.406 14:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.406 14:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.406 14:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.406 14:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.406 14:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.406 14:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.406 14:07:01 -- accel/accel.sh@42 -- # jq -r . 00:06:00.406 [2024-12-04 14:07:01.480151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.406 [2024-12-04 14:07:01.480361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:06:00.406 [2024-12-04 14:07:01.626897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.406 [2024-12-04 14:07:01.840514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=0x1 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=dif_generate 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 
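A quick sanity check on the result tables in this stretch of the log: each Total row's MiB/s figure is the transfer rate multiplied by the 4096-byte transfer size, truncated to whole MiB:

    \text{MiB/s} = \left\lfloor \frac{\text{transfers/s} \times 4096}{2^{20}} \right\rfloor

For the dif_generate table above, 153472 x 4096 / 2^20 = 599.5, printed as 599 MiB/s; likewise 98016/s gives 382 MiB/s for dif_verify and 425600/s gives 1662 MiB/s for xor. The per-core rows occasionally print a slightly different figure (608 here), but the Total rows follow this formula exactly.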
00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=software 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=32 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=32 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=1 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val=No 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:00.668 14:07:02 -- accel/accel.sh@21 -- # val= 00:06:00.668 14:07:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # IFS=: 00:06:00.668 14:07:02 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- 
accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@21 -- # val= 00:06:02.050 14:07:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # IFS=: 00:06:02.050 14:07:03 -- accel/accel.sh@20 -- # read -r var val 00:06:02.050 14:07:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.050 ************************************ 00:06:02.050 END TEST accel_dif_generate 00:06:02.050 ************************************ 00:06:02.050 14:07:03 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:02.050 14:07:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.050 00:06:02.050 real 0m3.938s 00:06:02.050 user 0m3.451s 00:06:02.050 sys 0m0.283s 00:06:02.050 14:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.050 14:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 14:07:03 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:02.310 14:07:03 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:02.310 14:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.310 14:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:02.310 ************************************ 00:06:02.310 START TEST accel_dif_generate_copy 00:06:02.310 ************************************ 00:06:02.310 14:07:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:02.310 14:07:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.310 14:07:03 -- accel/accel.sh@17 -- # local accel_module 00:06:02.310 14:07:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:02.310 14:07:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:02.310 14:07:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.310 14:07:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.310 14:07:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.310 14:07:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.310 14:07:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.310 14:07:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.310 14:07:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.310 14:07:03 -- accel/accel.sh@42 -- # jq -r . 00:06:02.310 [2024-12-04 14:07:03.559539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.310 [2024-12-04 14:07:03.559641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:06:02.310 [2024-12-04 14:07:03.707457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.569 [2024-12-04 14:07:03.844068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.471 14:07:05 -- accel/accel.sh@18 -- # out=' 00:06:04.471 SPDK Configuration: 00:06:04.471 Core mask: 0x1 00:06:04.471 00:06:04.471 Accel Perf Configuration: 00:06:04.471 Workload Type: dif_generate_copy 00:06:04.471 Vector size: 4096 bytes 00:06:04.471 Transfer size: 4096 bytes 00:06:04.471 Vector count 1 00:06:04.471 Module: software 00:06:04.471 Queue depth: 32 00:06:04.471 Allocate depth: 32 00:06:04.471 # threads/core: 1 00:06:04.471 Run time: 1 seconds 00:06:04.471 Verify: No 00:06:04.471 00:06:04.471 Running for 1 seconds... 00:06:04.471 00:06:04.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.471 ------------------------------------------------------------------------------------ 00:06:04.471 0,0 117344/s 465 MiB/s 0 0 00:06:04.471 ==================================================================================== 00:06:04.471 Total 117344/s 458 MiB/s 0 0' 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:04.471 14:07:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:04.471 14:07:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.471 14:07:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.471 14:07:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.471 14:07:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.471 14:07:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.471 14:07:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.471 14:07:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.471 14:07:05 -- accel/accel.sh@42 -- # jq -r . 00:06:04.471 [2024-12-04 14:07:05.449895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.471 [2024-12-04 14:07:05.450060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:06:04.471 [2024-12-04 14:07:05.588726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.471 [2024-12-04 14:07:05.725349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=0x1 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=software 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=32 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=32 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 
-- # val=1 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val=No 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:04.471 14:07:05 -- accel/accel.sh@21 -- # val= 00:06:04.471 14:07:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # IFS=: 00:06:04.471 14:07:05 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@21 -- # val= 00:06:05.849 14:07:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # IFS=: 00:06:05.849 14:07:07 -- accel/accel.sh@20 -- # read -r var val 00:06:05.849 14:07:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.849 14:07:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:05.849 14:07:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.849 00:06:05.849 real 0m3.785s 00:06:05.849 user 0m3.358s 00:06:05.850 sys 0m0.225s 00:06:05.850 14:07:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.850 ************************************ 00:06:05.850 14:07:07 -- common/autotest_common.sh@10 -- # set +x 00:06:05.850 END TEST accel_dif_generate_copy 00:06:05.850 ************************************ 00:06:06.112 14:07:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:06.112 14:07:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.112 14:07:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.112 14:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.112 14:07:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.112 ************************************ 00:06:06.112 START TEST accel_comp 00:06:06.112 ************************************ 00:06:06.112 14:07:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.112 14:07:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.112 14:07:07 -- accel/accel.sh@17 -- # local accel_module 00:06:06.112 14:07:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.112 14:07:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.112 14:07:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.112 14:07:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.112 14:07:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.112 14:07:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.112 14:07:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.112 14:07:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.112 14:07:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.112 14:07:07 -- accel/accel.sh@42 -- # jq -r . 00:06:06.112 [2024-12-04 14:07:07.411333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.112 [2024-12-04 14:07:07.411441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:06:06.112 [2024-12-04 14:07:07.553022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.374 [2024-12-04 14:07:07.769411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.285 14:07:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:08.285 00:06:08.285 SPDK Configuration: 00:06:08.285 Core mask: 0x1 00:06:08.285 00:06:08.285 Accel Perf Configuration: 00:06:08.285 Workload Type: compress 00:06:08.285 Transfer size: 4096 bytes 00:06:08.285 Vector count 1 00:06:08.285 Module: software 00:06:08.285 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.285 Queue depth: 32 00:06:08.285 Allocate depth: 32 00:06:08.285 # threads/core: 1 00:06:08.285 Run time: 1 seconds 00:06:08.285 Verify: No 00:06:08.285 00:06:08.285 Running for 1 seconds... 
00:06:08.285 00:06:08.285 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.285 ------------------------------------------------------------------------------------ 00:06:08.285 0,0 52704/s 219 MiB/s 0 0 00:06:08.285 ==================================================================================== 00:06:08.285 Total 52704/s 205 MiB/s 0 0' 00:06:08.285 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.285 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.285 14:07:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.285 14:07:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.285 14:07:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.285 14:07:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.285 14:07:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.286 14:07:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.286 14:07:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.286 14:07:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.286 14:07:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.286 14:07:09 -- accel/accel.sh@42 -- # jq -r . 00:06:08.286 [2024-12-04 14:07:09.449798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.286 [2024-12-04 14:07:09.450020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59335 ] 00:06:08.286 [2024-12-04 14:07:09.590968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.286 [2024-12-04 14:07:09.728276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val=0x1 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val=compress 00:06:08.544 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.544 14:07:09 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # IFS=: 
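Both the compress run whose table appears above and the decompress test that follows stream /home/vagrant/spdk_repo/spdk/test/accel/bib through the engine in 4096-byte transfers; only the decompress side passes -y, so its configuration reports "Verify: Yes". As a rough sketch of that roundtrip-and-verify idea — with zlib standing in purely as an illustration, since the log does not show which codec SPDK's software module actually uses — compressing and restoring a single chunk:

    /* Build with: cc roundtrip.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char in[4096], comp[8192], out[4096];
        memset(in, 'A', sizeof(in)); /* stand-in for one chunk of the bib file */

        uLongf clen = sizeof(comp);
        if (compress2(comp, &clen, in, sizeof(in), Z_DEFAULT_COMPRESSION) != Z_OK)
            return 1;

        uLongf dlen = sizeof(out);
        if (uncompress(out, &dlen, comp, clen) != Z_OK)
            return 1;

        /* The decompress run's "Verify: Yes" boils down to a check like this. */
        puts(dlen == sizeof(in) && memcmp(in, out, dlen) == 0 ? "match" : "MISMATCH");
        return 0;
    }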
00:06:08.544 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.544 14:07:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=software 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=32 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=32 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=1 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val=No 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:08.545 14:07:09 -- accel/accel.sh@21 -- # val= 00:06:08.545 14:07:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # IFS=: 00:06:08.545 14:07:09 -- accel/accel.sh@20 -- # read -r var val 00:06:09.951 14:07:11 -- accel/accel.sh@21 -- # val= 00:06:09.951 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.951 14:07:11 -- accel/accel.sh@21 -- # val= 00:06:09.951 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.951 14:07:11 -- accel/accel.sh@21 -- # val= 00:06:09.951 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.951 14:07:11 -- accel/accel.sh@21 -- # val= 
00:06:09.951 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.951 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.951 14:07:11 -- accel/accel.sh@21 -- # val= 00:06:09.952 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.952 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.952 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.952 14:07:11 -- accel/accel.sh@21 -- # val= 00:06:09.952 14:07:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.952 14:07:11 -- accel/accel.sh@20 -- # IFS=: 00:06:09.952 14:07:11 -- accel/accel.sh@20 -- # read -r var val 00:06:09.952 14:07:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.952 14:07:11 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:09.952 14:07:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.952 00:06:09.952 real 0m3.921s 00:06:09.952 user 0m3.448s 00:06:09.952 sys 0m0.265s 00:06:09.952 14:07:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.952 14:07:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.952 ************************************ 00:06:09.952 END TEST accel_comp 00:06:09.952 ************************************ 00:06:09.952 14:07:11 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.952 14:07:11 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:09.952 14:07:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.952 14:07:11 -- common/autotest_common.sh@10 -- # set +x 00:06:09.952 ************************************ 00:06:09.952 START TEST accel_decomp 00:06:09.952 ************************************ 00:06:09.952 14:07:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.952 14:07:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.952 14:07:11 -- accel/accel.sh@17 -- # local accel_module 00:06:09.952 14:07:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.952 14:07:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:09.952 14:07:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.952 14:07:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.952 14:07:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.952 14:07:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.952 14:07:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.952 14:07:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.952 14:07:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.952 14:07:11 -- accel/accel.sh@42 -- # jq -r . 00:06:09.952 [2024-12-04 14:07:11.378656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.952 [2024-12-04 14:07:11.378764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:06:10.211 [2024-12-04 14:07:11.527848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.211 [2024-12-04 14:07:11.673278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.115 14:07:13 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:12.115 00:06:12.115 SPDK Configuration: 00:06:12.115 Core mask: 0x1 00:06:12.115 00:06:12.115 Accel Perf Configuration: 00:06:12.115 Workload Type: decompress 00:06:12.115 Transfer size: 4096 bytes 00:06:12.115 Vector count 1 00:06:12.115 Module: software 00:06:12.115 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.115 Queue depth: 32 00:06:12.115 Allocate depth: 32 00:06:12.115 # threads/core: 1 00:06:12.115 Run time: 1 seconds 00:06:12.115 Verify: Yes 00:06:12.115 00:06:12.115 Running for 1 seconds... 00:06:12.115 00:06:12.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.115 ------------------------------------------------------------------------------------ 00:06:12.115 0,0 81664/s 150 MiB/s 0 0 00:06:12.115 ==================================================================================== 00:06:12.115 Total 81664/s 319 MiB/s 0 0' 00:06:12.115 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.115 14:07:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.115 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.115 14:07:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.115 14:07:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.115 14:07:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.115 14:07:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.115 14:07:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.115 14:07:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.115 14:07:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.115 14:07:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.115 14:07:13 -- accel/accel.sh@42 -- # jq -r . 00:06:12.115 [2024-12-04 14:07:13.285997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:12.115 [2024-12-04 14:07:13.286225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59402 ] 00:06:12.115 [2024-12-04 14:07:13.435322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.115 [2024-12-04 14:07:13.576356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=0x1 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=decompress 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=software 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=32 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- 
accel/accel.sh@21 -- # val=32 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=1 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val=Yes 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:12.374 14:07:13 -- accel/accel.sh@21 -- # val= 00:06:12.374 14:07:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # IFS=: 00:06:12.374 14:07:13 -- accel/accel.sh@20 -- # read -r var val 00:06:13.752 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.752 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.752 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.752 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.752 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.752 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.752 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.752 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.752 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.752 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.752 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.753 14:07:15 -- accel/accel.sh@21 -- # val= 00:06:13.753 14:07:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.753 14:07:15 -- accel/accel.sh@20 -- # IFS=: 00:06:13.753 14:07:15 -- accel/accel.sh@20 -- # read -r var val 00:06:13.753 ************************************ 00:06:13.753 END TEST accel_decomp 00:06:13.753 ************************************ 00:06:13.753 14:07:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.753 14:07:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:13.753 14:07:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.753 00:06:13.753 real 0m3.815s 00:06:13.753 user 0m3.392s 00:06:13.753 sys 0m0.217s 00:06:13.753 14:07:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.753 14:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.753 14:07:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
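The next test repeats the decompress workload with one extra flag, -o 0. Judging by the output that follows, -o 0 appears to tell accel_perf to use the input file's full decompressed size as the transfer size, so the reported "Transfer size" changes from the default 4096 bytes to 111250 bytes. A side-by-side sketch of the two invocations (paths as in this log; the -c /dev/fd/62 config descriptor is supplied by the harness, so these lines are illustrative rather than directly runnable):

    # Default transfer size (4096 bytes), as in the accel_decomp test above:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
    # -o 0: transfer the whole decompressed buffer (111250 bytes here):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0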
00:06:13.753 14:07:15 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:13.753 14:07:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.753 14:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:14.011 ************************************ 00:06:14.011 START TEST accel_decmop_full 00:06:14.011 ************************************ 00:06:14.011 14:07:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.011 14:07:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.011 14:07:15 -- accel/accel.sh@17 -- # local accel_module 00:06:14.011 14:07:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.011 14:07:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:14.011 14:07:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.011 14:07:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.011 14:07:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.011 14:07:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.011 14:07:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.011 14:07:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.011 14:07:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.011 14:07:15 -- accel/accel.sh@42 -- # jq -r . 00:06:14.011 [2024-12-04 14:07:15.249668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.011 [2024-12-04 14:07:15.249751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:06:14.011 [2024-12-04 14:07:15.388849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.269 [2024-12-04 14:07:15.524053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.645 14:07:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:15.645 00:06:15.645 SPDK Configuration: 00:06:15.645 Core mask: 0x1 00:06:15.645 00:06:15.646 Accel Perf Configuration: 00:06:15.646 Workload Type: decompress 00:06:15.646 Transfer size: 111250 bytes 00:06:15.646 Vector count 1 00:06:15.646 Module: software 00:06:15.646 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.646 Queue depth: 32 00:06:15.646 Allocate depth: 32 00:06:15.646 # threads/core: 1 00:06:15.646 Run time: 1 seconds 00:06:15.646 Verify: Yes 00:06:15.646 00:06:15.646 Running for 1 seconds... 
00:06:15.646 00:06:15.646 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.646 ------------------------------------------------------------------------------------ 00:06:15.646 0,0 5664/s 233 MiB/s 0 0 00:06:15.646 ==================================================================================== 00:06:15.646 Total 5664/s 600 MiB/s 0 0' 00:06:15.646 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:15.904 14:07:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.904 14:07:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:15.904 14:07:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.904 14:07:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.904 14:07:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.904 14:07:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.904 14:07:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.904 14:07:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.904 14:07:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.904 14:07:17 -- accel/accel.sh@42 -- # jq -r . 00:06:15.904 [2024-12-04 14:07:17.143854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.904 [2024-12-04 14:07:17.144076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59470 ] 00:06:15.904 [2024-12-04 14:07:17.291155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.176 [2024-12-04 14:07:17.426895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.176 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.176 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.176 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=0x1 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=decompress 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:16.177 14:07:17 -- accel/accel.sh@20 
-- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=software 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=32 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=32 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val=1 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.177 14:07:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.177 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.177 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.178 14:07:17 -- accel/accel.sh@21 -- # val=Yes 00:06:16.178 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.178 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.178 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:16.178 14:07:17 -- accel/accel.sh@21 -- # val= 00:06:16.178 14:07:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # IFS=: 00:06:16.178 14:07:17 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # 
val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.554 14:07:19 -- accel/accel.sh@21 -- # val= 00:06:17.554 14:07:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # IFS=: 00:06:17.554 14:07:19 -- accel/accel.sh@20 -- # read -r var val 00:06:17.814 ************************************ 00:06:17.814 END TEST accel_decmop_full 00:06:17.814 ************************************ 00:06:17.814 14:07:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.814 14:07:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:17.814 14:07:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.814 00:06:17.814 real 0m3.804s 00:06:17.814 user 0m3.383s 00:06:17.814 sys 0m0.215s 00:06:17.814 14:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.814 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 14:07:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.814 14:07:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:17.814 14:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.814 14:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 ************************************ 00:06:17.814 START TEST accel_decomp_mcore 00:06:17.814 ************************************ 00:06:17.814 14:07:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.814 14:07:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.814 14:07:19 -- accel/accel.sh@17 -- # local accel_module 00:06:17.814 14:07:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.814 14:07:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:17.814 14:07:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.814 14:07:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.814 14:07:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.814 14:07:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.814 14:07:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.814 14:07:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.814 14:07:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.814 14:07:19 -- accel/accel.sh@42 -- # jq -r . 00:06:17.814 [2024-12-04 14:07:19.112175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
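accel_decomp_mcore adds -m 0xf to the same workload. The mask is a hex bitmap of CPU cores: 0xf sets bits 0 through 3, which matches the "Total cores available: 4" notice and the four "Reactor started on core N" lines that follow. A small stand-alone sketch of how such a mask decodes, not part of the harness:

    # Decode a DPDK/SPDK-style hex core mask: each set bit enables one core.
    mask=0xf
    for core in {0..7}; do
        if (( (mask >> core) & 1 )); then
            echo "core $core enabled"    # prints cores 0 through 3 for 0xf
        fi
    done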
00:06:17.814 [2024-12-04 14:07:19.112279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59511 ] 00:06:17.814 [2024-12-04 14:07:19.260264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.073 [2024-12-04 14:07:19.405083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.073 [2024-12-04 14:07:19.405628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.073 [2024-12-04 14:07:19.405834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.073 [2024-12-04 14:07:19.405858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.978 14:07:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:19.978 00:06:19.978 SPDK Configuration: 00:06:19.978 Core mask: 0xf 00:06:19.978 00:06:19.978 Accel Perf Configuration: 00:06:19.978 Workload Type: decompress 00:06:19.978 Transfer size: 4096 bytes 00:06:19.978 Vector count 1 00:06:19.978 Module: software 00:06:19.978 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.978 Queue depth: 32 00:06:19.978 Allocate depth: 32 00:06:19.978 # threads/core: 1 00:06:19.978 Run time: 1 seconds 00:06:19.978 Verify: Yes 00:06:19.978 00:06:19.978 Running for 1 seconds... 00:06:19.978 00:06:19.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.978 ------------------------------------------------------------------------------------ 00:06:19.978 0,0 75808/s 139 MiB/s 0 0 00:06:19.978 3,0 58240/s 107 MiB/s 0 0 00:06:19.978 2,0 58240/s 107 MiB/s 0 0 00:06:19.978 1,0 58336/s 107 MiB/s 0 0 00:06:19.978 ==================================================================================== 00:06:19.978 Total 250624/s 979 MiB/s 0 0' 00:06:19.978 14:07:20 -- accel/accel.sh@20 -- # IFS=: 00:06:19.978 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:19.978 14:07:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.978 14:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.978 14:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.978 14:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.978 14:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.978 14:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.978 14:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.978 14:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.978 14:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.978 14:07:21 -- accel/accel.sh@42 -- # jq -r . 00:06:19.978 [2024-12-04 14:07:21.038118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
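In the 0xf results table above, the Total row is simply the sum of the four per-core rows, and bandwidth again follows from the 4096-byte transfer size. A quick check of both numbers:

    # Per-core transfer rates from the table, cores 0-3:
    echo $(( 75808 + 58240 + 58240 + 58336 ))   # 250624 transfers/s (the Total row)
    echo $(( 250624 * 4096 / 1024 / 1024 ))     # 979 MiB/s (the Total bandwidth)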
00:06:19.978 [2024-12-04 14:07:21.038221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:06:19.978 [2024-12-04 14:07:21.184792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.978 [2024-12-04 14:07:21.334226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.978 [2024-12-04 14:07:21.334464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.978 [2024-12-04 14:07:21.334742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.978 [2024-12-04 14:07:21.334832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val=0xf 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val=decompress 00:06:20.237 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.237 14:07:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.237 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.237 14:07:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=software 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 
00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=32 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=32 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=1 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val=Yes 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:20.238 14:07:21 -- accel/accel.sh@21 -- # val= 00:06:20.238 14:07:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # IFS=: 00:06:20.238 14:07:21 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- 
accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@21 -- # val= 00:06:21.616 14:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # IFS=: 00:06:21.616 14:07:22 -- accel/accel.sh@20 -- # read -r var val 00:06:21.616 14:07:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.616 14:07:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:21.616 14:07:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.616 00:06:21.616 real 0m3.847s 00:06:21.616 user 0m11.625s 00:06:21.616 sys 0m0.267s 00:06:21.616 14:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.616 14:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.616 ************************************ 00:06:21.616 END TEST accel_decomp_mcore 00:06:21.616 ************************************ 00:06:21.616 14:07:22 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.616 14:07:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:21.617 14:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.617 14:07:22 -- common/autotest_common.sh@10 -- # set +x 00:06:21.617 ************************************ 00:06:21.617 START TEST accel_decomp_full_mcore 00:06:21.617 ************************************ 00:06:21.617 14:07:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.617 14:07:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.617 14:07:22 -- accel/accel.sh@17 -- # local accel_module 00:06:21.617 14:07:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.617 14:07:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.617 14:07:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.617 14:07:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.617 14:07:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.617 14:07:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.617 14:07:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.617 14:07:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.617 14:07:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.617 14:07:22 -- accel/accel.sh@42 -- # jq -r . 00:06:21.617 [2024-12-04 14:07:23.010637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.617 [2024-12-04 14:07:23.010718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59579 ] 00:06:21.877 [2024-12-04 14:07:23.151633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.877 [2024-12-04 14:07:23.292335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.877 [2024-12-04 14:07:23.292496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.877 [2024-12-04 14:07:23.292775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.877 [2024-12-04 14:07:23.292805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.782 14:07:24 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:23.782 00:06:23.782 SPDK Configuration: 00:06:23.782 Core mask: 0xf 00:06:23.782 00:06:23.782 Accel Perf Configuration: 00:06:23.782 Workload Type: decompress 00:06:23.782 Transfer size: 111250 bytes 00:06:23.782 Vector count 1 00:06:23.782 Module: software 00:06:23.782 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.782 Queue depth: 32 00:06:23.782 Allocate depth: 32 00:06:23.782 # threads/core: 1 00:06:23.782 Run time: 1 seconds 00:06:23.782 Verify: Yes 00:06:23.782 00:06:23.782 Running for 1 seconds... 00:06:23.782 00:06:23.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.782 ------------------------------------------------------------------------------------ 00:06:23.783 0,0 5632/s 232 MiB/s 0 0 00:06:23.783 3,0 4320/s 178 MiB/s 0 0 00:06:23.783 2,0 5600/s 231 MiB/s 0 0 00:06:23.783 1,0 4320/s 178 MiB/s 0 0 00:06:23.783 ==================================================================================== 00:06:23.783 Total 19872/s 2108 MiB/s 0 0' 00:06:23.783 14:07:24 -- accel/accel.sh@20 -- # IFS=: 00:06:23.783 14:07:24 -- accel/accel.sh@20 -- # read -r var val 00:06:23.783 14:07:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.783 14:07:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.783 14:07:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.783 14:07:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.783 14:07:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.783 14:07:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.783 14:07:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.783 14:07:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.783 14:07:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.783 14:07:24 -- accel/accel.sh@42 -- # jq -r . 00:06:23.783 [2024-12-04 14:07:24.939475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
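accel_decomp_full_mcore combines the two previous variants (-o 0 plus -m 0xf): four cores, each decompressing whole 111250-byte buffers. The same accounting as before reproduces the Total row of the table above:

    # Sum the per-core rows, then convert to bandwidth at 111250 B per transfer.
    echo $(( 5632 + 4320 + 5600 + 4320 ))       # 19872 transfers/s
    echo $(( 19872 * 111250 / 1024 / 1024 ))    # 2108 MiB/s, matching the Total row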
00:06:23.783 [2024-12-04 14:07:24.939555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59608 ] 00:06:23.783 [2024-12-04 14:07:25.080477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.783 [2024-12-04 14:07:25.220490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.783 [2024-12-04 14:07:25.221136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.783 [2024-12-04 14:07:25.221285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.783 [2024-12-04 14:07:25.221309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=0xf 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=decompress 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=software 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 
00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=32 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=32 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=1 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val=Yes 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:24.045 14:07:25 -- accel/accel.sh@21 -- # val= 00:06:24.045 14:07:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # IFS=: 00:06:24.045 14:07:25 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.421 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.421 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.421 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.422 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.422 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.422 14:07:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:25.422 14:07:26 -- accel/accel.sh@21 -- # val= 00:06:25.422 14:07:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.422 14:07:26 -- accel/accel.sh@20 -- # IFS=: 00:06:25.422 14:07:26 -- accel/accel.sh@20 -- # read -r var val 00:06:25.422 ************************************ 00:06:25.422 END TEST accel_decomp_full_mcore 00:06:25.422 ************************************ 00:06:25.422 14:07:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.422 14:07:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:25.422 14:07:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.422 00:06:25.422 real 0m3.851s 00:06:25.422 user 0m11.739s 00:06:25.422 sys 0m0.264s 00:06:25.422 14:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.422 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 14:07:26 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.422 14:07:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:25.422 14:07:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.422 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 ************************************ 00:06:25.422 START TEST accel_decomp_mthread 00:06:25.422 ************************************ 00:06:25.422 14:07:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.422 14:07:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.422 14:07:26 -- accel/accel.sh@17 -- # local accel_module 00:06:25.422 14:07:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.422 14:07:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:25.422 14:07:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.422 14:07:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.422 14:07:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.422 14:07:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.422 14:07:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.422 14:07:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.422 14:07:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.422 14:07:26 -- accel/accel.sh@42 -- # jq -r . 00:06:25.683 [2024-12-04 14:07:26.912464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.683 [2024-12-04 14:07:26.912567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:06:25.683 [2024-12-04 14:07:27.061781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.944 [2024-12-04 14:07:27.276994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.853 14:07:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:27.853 
00:06:27.853 SPDK Configuration:
00:06:27.853 Core mask: 0x1
00:06:27.853 
00:06:27.853 Accel Perf Configuration:
00:06:27.853 Workload Type: decompress
00:06:27.853 Transfer size: 4096 bytes
00:06:27.853 Vector count 1
00:06:27.853 Module: software
00:06:27.853 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:06:27.853 Queue depth: 32
00:06:27.853 Allocate depth: 32
00:06:27.853 # threads/core: 2
00:06:27.853 Run time: 1 seconds
00:06:27.853 Verify: Yes
00:06:27.853 
00:06:27.853 Running for 1 seconds...
00:06:27.853 
00:06:27.853 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:27.853 ------------------------------------------------------------------------------------
00:06:27.853 0,1 31296/s 57 MiB/s 0 0
00:06:27.853 0,0 31168/s 57 MiB/s 0 0
00:06:27.853 ====================================================================================
00:06:27.853 Total 62464/s 244 MiB/s 0 0'
00:06:27.853 14:07:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:27.853 14:07:28 -- accel/accel.sh@20 -- # IFS=:
00:06:27.853 14:07:29 -- accel/accel.sh@20 -- # read -r var val
00:06:27.853 14:07:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:06:27.853 14:07:29 -- accel/accel.sh@12 -- # build_accel_config
00:06:27.853 14:07:29 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:27.853 14:07:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:27.853 14:07:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:27.853 14:07:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:27.853 14:07:29 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:27.853 14:07:29 -- accel/accel.sh@41 -- # local IFS=,
00:06:27.853 14:07:29 -- accel/accel.sh@42 -- # jq -r .
00:06:27.853 [2024-12-04 14:07:29.034794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
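accel_decomp_mthread keeps a single core (mask 0x1) but passes -T 2, so accel_perf runs two worker threads on core 0; that is why the table above reports "# threads/core: 2" and carries rows 0,0 and 0,1 in the Core,Thread column. The Total row is again their sum:

    echo $(( 31296 + 31168 ))                # 62464 transfers/s across both threads
    echo $(( 62464 * 4096 / 1024 / 1024 ))   # 244 MiB/s, matching the Total row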
00:06:27.853 [2024-12-04 14:07:29.034895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:06:27.853 [2024-12-04 14:07:29.180342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.111 [2024-12-04 14:07:29.318391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=0x1 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=decompress 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=software 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=32 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- 
accel/accel.sh@21 -- # val=32 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=2 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val=Yes 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:28.111 14:07:29 -- accel/accel.sh@21 -- # val= 00:06:28.111 14:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # IFS=: 00:06:28.111 14:07:29 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@21 -- # val= 00:06:29.488 14:07:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # IFS=: 00:06:29.488 14:07:30 -- accel/accel.sh@20 -- # read -r var val 00:06:29.488 14:07:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.488 14:07:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.488 14:07:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.488 00:06:29.488 real 0m4.029s 00:06:29.488 user 0m3.537s 00:06:29.488 sys 0m0.276s 00:06:29.488 14:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.488 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.488 ************************************ 00:06:29.488 END 
TEST accel_decomp_mthread 00:06:29.488 ************************************ 00:06:29.488 14:07:30 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.488 14:07:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:29.488 14:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.488 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:06:29.749 ************************************ 00:06:29.749 START TEST accel_deomp_full_mthread 00:06:29.749 ************************************ 00:06:29.749 14:07:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.749 14:07:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.749 14:07:30 -- accel/accel.sh@17 -- # local accel_module 00:06:29.749 14:07:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.749 14:07:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.749 14:07:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.749 14:07:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.749 14:07:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.749 14:07:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.749 14:07:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.749 14:07:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.749 14:07:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.749 14:07:30 -- accel/accel.sh@42 -- # jq -r . 00:06:29.749 [2024-12-04 14:07:30.992177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.749 [2024-12-04 14:07:30.992256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ] 00:06:29.749 [2024-12-04 14:07:31.134349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.011 [2024-12-04 14:07:31.316314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.924 14:07:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:31.924 00:06:31.924 SPDK Configuration: 00:06:31.924 Core mask: 0x1 00:06:31.924 00:06:31.924 Accel Perf Configuration: 00:06:31.924 Workload Type: decompress 00:06:31.924 Transfer size: 111250 bytes 00:06:31.924 Vector count 1 00:06:31.924 Module: software 00:06:31.924 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.924 Queue depth: 32 00:06:31.924 Allocate depth: 32 00:06:31.924 # threads/core: 2 00:06:31.924 Run time: 1 seconds 00:06:31.924 Verify: Yes 00:06:31.924 00:06:31.924 Running for 1 seconds... 
00:06:31.924 00:06:31.924 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.924 ------------------------------------------------------------------------------------ 00:06:31.924 0,1 2176/s 89 MiB/s 0 0 00:06:31.924 0,0 2144/s 88 MiB/s 0 0 00:06:31.924 ==================================================================================== 00:06:31.924 Total 4320/s 458 MiB/s 0 0' 00:06:31.924 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:31.924 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:31.924 14:07:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.924 14:07:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.924 14:07:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.924 14:07:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.924 14:07:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.924 14:07:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.924 14:07:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.924 14:07:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.924 14:07:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.924 14:07:33 -- accel/accel.sh@42 -- # jq -r . 00:06:31.924 [2024-12-04 14:07:33.234640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.924 [2024-12-04 14:07:33.234762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59748 ] 00:06:31.924 [2024-12-04 14:07:33.384717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.181 [2024-12-04 14:07:33.534649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=0x1 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=decompress 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=software 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=32 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=32 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=2 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val=Yes 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:32.440 14:07:33 -- accel/accel.sh@21 -- # val= 00:06:32.440 14:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # IFS=: 00:06:32.440 14:07:33 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # 
read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@21 -- # val= 00:06:33.845 14:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # IFS=: 00:06:33.845 14:07:35 -- accel/accel.sh@20 -- # read -r var val 00:06:33.845 14:07:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.845 14:07:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:33.845 14:07:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.846 00:06:33.846 real 0m4.186s 00:06:33.846 user 0m3.703s 00:06:33.846 sys 0m0.268s 00:06:33.846 14:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.846 ************************************ 00:06:33.846 END TEST accel_decomp_full_mthread 00:06:33.846 ************************************ 00:06:33.846 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.846 14:07:35 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:33.846 14:07:35 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.846 14:07:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:33.846 14:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.846 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:06:33.846 14:07:35 -- accel/accel.sh@129 -- # build_accel_config 00:06:33.846 14:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.846 14:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.846 14:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.846 14:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.846 14:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.846 14:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.846 14:07:35 -- accel/accel.sh@42 -- # jq -r . 00:06:33.846 ************************************ 00:06:33.846 START TEST accel_dif_functional_tests 00:06:33.846 ************************************ 00:06:33.846 14:07:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.846 [2024-12-04 14:07:35.256655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:33.846 [2024-12-04 14:07:35.256768] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59790 ] 00:06:34.105 [2024-12-04 14:07:35.403887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.105 [2024-12-04 14:07:35.555343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.105 [2024-12-04 14:07:35.555503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.105 [2024-12-04 14:07:35.555584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.363 00:06:34.363 00:06:34.363 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.363 http://cunit.sourceforge.net/ 00:06:34.363 00:06:34.363 00:06:34.363 Suite: accel_dif 00:06:34.363 Test: verify: DIF generated, GUARD check ...passed 00:06:34.363 Test: verify: DIF generated, APPTAG check ...passed 00:06:34.363 Test: verify: DIF generated, REFTAG check ...passed 00:06:34.363 Test: verify: DIF not generated, GUARD check ...passed 00:06:34.363 Test: verify: DIF not generated, APPTAG check ...passed 00:06:34.363 Test: verify: DIF not generated, REFTAG check ...passed 00:06:34.363 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:34.363 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-04 14:07:35.728341] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.363 [2024-12-04 14:07:35.728398] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.363 [2024-12-04 14:07:35.728443] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.363 [2024-12-04 14:07:35.728471] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.363 [2024-12-04 14:07:35.728493] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.363 [2024-12-04 14:07:35.728511] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.363 passed 00:06:34.363 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:34.363 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:34.363 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:34.363 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:34.363 Test: generate copy: DIF generated, GUARD check ...passed 00:06:34.363 Test: generate copy: DIF generated, APPTAG check ...[2024-12-04 14:07:35.728576] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:34.363 [2024-12-04 14:07:35.728763] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:34.363 passed 00:06:34.363 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:34.363 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:34.363 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:34.363 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:34.363 Test: generate copy: iovecs-len validate ...passed 00:06:34.363 Test: generate copy: buffer alignment validate ...passed 00:06:34.363 00:06:34.363 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.363 suites 1 1 n/a 0 0 00:06:34.363 tests 20 20 20 0 0
asserts 204 204 204 0 n/a 00:06:34.363 00:06:34.363 Elapsed time = 0.002 seconds 00:06:34.363 [2024-12-04 14:07:35.729153] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:34.932 00:06:34.932 real 0m1.133s 00:06:34.932 user 0m2.010s 00:06:34.932 sys 0m0.159s 00:06:34.932 14:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.932 ************************************ 00:06:34.932 END TEST accel_dif_functional_tests 00:06:34.932 ************************************ 00:06:34.932 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.932 00:06:34.932 real 1m25.073s 00:06:34.932 user 1m32.238s 00:06:34.932 sys 0m6.303s 00:06:34.932 ************************************ 00:06:34.932 END TEST accel 00:06:34.932 ************************************ 00:06:34.932 14:07:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.932 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.193 14:07:36 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:35.193 14:07:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.193 14:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.193 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.193 ************************************ 00:06:35.193 START TEST accel_rpc 00:06:35.193 ************************************ 00:06:35.193 14:07:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:35.193 * Looking for test storage... 00:06:35.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:35.193 14:07:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.193 14:07:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.193 14:07:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.193 14:07:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.193 14:07:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.193 14:07:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.193 14:07:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.193 14:07:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.193 14:07:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.193 14:07:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.193 14:07:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.193 14:07:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.193 14:07:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.193 14:07:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.193 14:07:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.193 14:07:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.193 14:07:36 -- scripts/common.sh@344 -- # : 1 00:06:35.193 14:07:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.193 14:07:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.193 14:07:36 -- scripts/common.sh@364 -- # decimal 1 00:06:35.194 14:07:36 -- scripts/common.sh@352 -- # local d=1 00:06:35.194 14:07:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.194 14:07:36 -- scripts/common.sh@354 -- # echo 1 00:06:35.194 14:07:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.194 14:07:36 -- scripts/common.sh@365 -- # decimal 2 00:06:35.194 14:07:36 -- scripts/common.sh@352 -- # local d=2 00:06:35.194 14:07:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.194 14:07:36 -- scripts/common.sh@354 -- # echo 2 00:06:35.194 14:07:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.194 14:07:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.194 14:07:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.194 14:07:36 -- scripts/common.sh@367 -- # return 0 00:06:35.194 14:07:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.194 14:07:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.194 --rc genhtml_branch_coverage=1 00:06:35.194 --rc genhtml_function_coverage=1 00:06:35.194 --rc genhtml_legend=1 00:06:35.194 --rc geninfo_all_blocks=1 00:06:35.194 --rc geninfo_unexecuted_blocks=1 00:06:35.194 00:06:35.194 ' 00:06:35.194 14:07:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.194 --rc genhtml_branch_coverage=1 00:06:35.194 --rc genhtml_function_coverage=1 00:06:35.194 --rc genhtml_legend=1 00:06:35.194 --rc geninfo_all_blocks=1 00:06:35.194 --rc geninfo_unexecuted_blocks=1 00:06:35.194 00:06:35.194 ' 00:06:35.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.194 14:07:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.194 --rc genhtml_branch_coverage=1 00:06:35.194 --rc genhtml_function_coverage=1 00:06:35.194 --rc genhtml_legend=1 00:06:35.194 --rc geninfo_all_blocks=1 00:06:35.194 --rc geninfo_unexecuted_blocks=1 00:06:35.194 00:06:35.194 ' 00:06:35.194 14:07:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.194 --rc genhtml_branch_coverage=1 00:06:35.194 --rc genhtml_function_coverage=1 00:06:35.194 --rc genhtml_legend=1 00:06:35.194 --rc geninfo_all_blocks=1 00:06:35.194 --rc geninfo_unexecuted_blocks=1 00:06:35.194 00:06:35.194 ' 00:06:35.194 14:07:36 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.194 14:07:36 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59868 00:06:35.194 14:07:36 -- accel/accel_rpc.sh@15 -- # waitforlisten 59868 00:06:35.194 14:07:36 -- common/autotest_common.sh@829 -- # '[' -z 59868 ']' 00:06:35.194 14:07:36 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:35.194 14:07:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.194 14:07:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.194 14:07:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.194 14:07:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.194 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:06:35.194 [2024-12-04 14:07:36.645828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.194 [2024-12-04 14:07:36.646234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:06:35.453 [2024-12-04 14:07:36.795906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.711 [2024-12-04 14:07:36.950380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.711 [2024-12-04 14:07:36.950644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.278 14:07:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.278 14:07:37 -- common/autotest_common.sh@862 -- # return 0 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:36.278 14:07:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.278 14:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.278 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 ************************************ 00:06:36.278 START TEST accel_assign_opcode 00:06:36.278 ************************************ 00:06:36.278 14:07:37 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:36.278 14:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.278 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 [2024-12-04 14:07:37.471180] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:36.278 14:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:36.278 14:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.278 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.278 [2024-12-04 14:07:37.479142] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:36.278 14:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.278 14:07:37 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:36.278 14:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.278 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.535 14:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.535 14:07:37 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:36.535 14:07:37 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:36.535 14:07:37 -- accel/accel_rpc.sh@42 -- # grep software 00:06:36.535 14:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.535 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.535 14:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.535 software 00:06:36.535 
************************************ 00:06:36.535 END TEST accel_assign_opcode 00:06:36.535 ************************************ 00:06:36.535 00:06:36.535 real 0m0.475s 00:06:36.535 user 0m0.033s 00:06:36.535 sys 0m0.012s 00:06:36.535 14:07:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.535 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:06:36.535 14:07:37 -- accel/accel_rpc.sh@55 -- # killprocess 59868 00:06:36.535 14:07:37 -- common/autotest_common.sh@936 -- # '[' -z 59868 ']' 00:06:36.535 14:07:37 -- common/autotest_common.sh@940 -- # kill -0 59868 00:06:36.535 14:07:37 -- common/autotest_common.sh@941 -- # uname 00:06:36.535 14:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.535 14:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59868 00:06:36.535 killing process with pid 59868 00:06:36.535 14:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.535 14:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.535 14:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59868' 00:06:36.535 14:07:37 -- common/autotest_common.sh@955 -- # kill 59868 00:06:36.535 14:07:37 -- common/autotest_common.sh@960 -- # wait 59868 00:06:37.913 ************************************ 00:06:37.913 END TEST accel_rpc 00:06:37.913 ************************************ 00:06:37.913 00:06:37.913 real 0m2.739s 00:06:37.913 user 0m2.695s 00:06:37.913 sys 0m0.403s 00:06:37.913 14:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.913 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.913 14:07:39 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.913 14:07:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.913 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.913 ************************************ 00:06:37.913 START TEST app_cmdline 00:06:37.913 ************************************ 00:06:37.913 14:07:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.913 * Looking for test storage... 
00:06:37.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:37.913 14:07:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:37.913 14:07:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:37.913 14:07:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:37.913 14:07:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:37.913 14:07:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:37.913 14:07:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:37.913 14:07:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:37.913 14:07:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:37.913 14:07:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.913 14:07:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:37.913 14:07:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:37.913 14:07:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:37.913 14:07:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:37.913 14:07:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:37.913 14:07:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:37.913 14:07:39 -- scripts/common.sh@344 -- # : 1 00:06:37.913 14:07:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:37.913 14:07:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.913 14:07:39 -- scripts/common.sh@364 -- # decimal 1 00:06:37.913 14:07:39 -- scripts/common.sh@352 -- # local d=1 00:06:37.913 14:07:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.913 14:07:39 -- scripts/common.sh@354 -- # echo 1 00:06:37.913 14:07:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:37.913 14:07:39 -- scripts/common.sh@365 -- # decimal 2 00:06:37.913 14:07:39 -- scripts/common.sh@352 -- # local d=2 00:06:37.913 14:07:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.913 14:07:39 -- scripts/common.sh@354 -- # echo 2 00:06:37.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.913 14:07:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:37.913 14:07:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:37.913 14:07:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:37.913 14:07:39 -- scripts/common.sh@367 -- # return 0 00:06:37.913 14:07:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:37.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.913 --rc genhtml_branch_coverage=1 00:06:37.913 --rc genhtml_function_coverage=1 00:06:37.913 --rc genhtml_legend=1 00:06:37.913 --rc geninfo_all_blocks=1 00:06:37.913 --rc geninfo_unexecuted_blocks=1 00:06:37.913 00:06:37.913 ' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:37.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.913 --rc genhtml_branch_coverage=1 00:06:37.913 --rc genhtml_function_coverage=1 00:06:37.913 --rc genhtml_legend=1 00:06:37.913 --rc geninfo_all_blocks=1 00:06:37.913 --rc geninfo_unexecuted_blocks=1 00:06:37.913 00:06:37.913 ' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:37.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.913 --rc genhtml_branch_coverage=1 00:06:37.913 --rc genhtml_function_coverage=1 00:06:37.913 --rc genhtml_legend=1 00:06:37.913 --rc geninfo_all_blocks=1 00:06:37.913 --rc geninfo_unexecuted_blocks=1 00:06:37.913 00:06:37.913 ' 00:06:37.913 14:07:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:37.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.913 --rc genhtml_branch_coverage=1 00:06:37.913 --rc genhtml_function_coverage=1 00:06:37.913 --rc genhtml_legend=1 00:06:37.913 --rc geninfo_all_blocks=1 00:06:37.913 --rc geninfo_unexecuted_blocks=1 00:06:37.913 00:06:37.913 ' 00:06:37.913 14:07:39 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:37.913 14:07:39 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59980 00:06:37.913 14:07:39 -- app/cmdline.sh@18 -- # waitforlisten 59980 00:06:37.913 14:07:39 -- common/autotest_common.sh@829 -- # '[' -z 59980 ']' 00:06:37.913 14:07:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.913 14:07:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.913 14:07:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.913 14:07:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.913 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.913 14:07:39 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.173 [2024-12-04 14:07:39.414952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:38.173 [2024-12-04 14:07:39.415114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59980 ] 00:06:38.173 [2024-12-04 14:07:39.567925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.432 [2024-12-04 14:07:39.722821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.432 [2024-12-04 14:07:39.722976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.000 14:07:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.000 14:07:40 -- common/autotest_common.sh@862 -- # return 0 00:06:39.000 14:07:40 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:39.000 { 00:06:39.000 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:39.000 "fields": { 00:06:39.000 "major": 24, 00:06:39.000 "minor": 1, 00:06:39.000 "patch": 1, 00:06:39.000 "suffix": "-pre", 00:06:39.000 "commit": "c13c99a5e" 00:06:39.000 } 00:06:39.000 } 00:06:39.000 14:07:40 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.000 14:07:40 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.000 14:07:40 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.000 14:07:40 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.000 14:07:40 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.000 14:07:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.000 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:06:39.000 14:07:40 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.000 14:07:40 -- app/cmdline.sh@26 -- # sort 00:06:39.000 14:07:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.000 14:07:40 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.000 14:07:40 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.001 14:07:40 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.001 14:07:40 -- common/autotest_common.sh@650 -- # local es=0 00:06:39.001 14:07:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.001 14:07:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.001 14:07:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.001 14:07:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.001 14:07:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.001 14:07:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.001 14:07:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.001 14:07:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.001 14:07:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:39.001 14:07:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.260 request: 00:06:39.260 { 00:06:39.260 "method": "env_dpdk_get_mem_stats", 00:06:39.260 "req_id": 1 00:06:39.260 } 00:06:39.260 Got 
JSON-RPC error response 00:06:39.260 response: 00:06:39.260 { 00:06:39.260 "code": -32601, 00:06:39.260 "message": "Method not found" 00:06:39.260 } 00:06:39.260 14:07:40 -- common/autotest_common.sh@653 -- # es=1 00:06:39.260 14:07:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.260 14:07:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.260 14:07:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.260 14:07:40 -- app/cmdline.sh@1 -- # killprocess 59980 00:06:39.260 14:07:40 -- common/autotest_common.sh@936 -- # '[' -z 59980 ']' 00:06:39.260 14:07:40 -- common/autotest_common.sh@940 -- # kill -0 59980 00:06:39.260 14:07:40 -- common/autotest_common.sh@941 -- # uname 00:06:39.260 14:07:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.260 14:07:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59980 00:06:39.260 killing process with pid 59980 00:06:39.260 14:07:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.260 14:07:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.260 14:07:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59980' 00:06:39.260 14:07:40 -- common/autotest_common.sh@955 -- # kill 59980 00:06:39.260 14:07:40 -- common/autotest_common.sh@960 -- # wait 59980 00:06:40.636 ************************************ 00:06:40.636 END TEST app_cmdline 00:06:40.636 ************************************ 00:06:40.636 00:06:40.636 real 0m2.637s 00:06:40.636 user 0m2.907s 00:06:40.636 sys 0m0.422s 00:06:40.636 14:07:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.636 14:07:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.636 14:07:41 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.636 14:07:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.636 14:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.636 14:07:41 -- common/autotest_common.sh@10 -- # set +x 00:06:40.636 ************************************ 00:06:40.636 START TEST version 00:06:40.636 ************************************ 00:06:40.636 14:07:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.636 * Looking for test storage... 
00:06:40.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.636 14:07:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.636 14:07:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.636 14:07:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.636 14:07:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.636 14:07:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.636 14:07:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.636 14:07:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.636 14:07:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.636 14:07:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.636 14:07:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.636 14:07:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.636 14:07:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.636 14:07:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.636 14:07:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.636 14:07:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.636 14:07:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.636 14:07:42 -- scripts/common.sh@344 -- # : 1 00:06:40.636 14:07:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.636 14:07:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.636 14:07:42 -- scripts/common.sh@364 -- # decimal 1 00:06:40.636 14:07:42 -- scripts/common.sh@352 -- # local d=1 00:06:40.636 14:07:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.636 14:07:42 -- scripts/common.sh@354 -- # echo 1 00:06:40.636 14:07:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.636 14:07:42 -- scripts/common.sh@365 -- # decimal 2 00:06:40.636 14:07:42 -- scripts/common.sh@352 -- # local d=2 00:06:40.636 14:07:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.636 14:07:42 -- scripts/common.sh@354 -- # echo 2 00:06:40.636 14:07:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.636 14:07:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.636 14:07:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.636 14:07:42 -- scripts/common.sh@367 -- # return 0 00:06:40.636 14:07:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.636 14:07:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.636 --rc genhtml_branch_coverage=1 00:06:40.636 --rc genhtml_function_coverage=1 00:06:40.636 --rc genhtml_legend=1 00:06:40.636 --rc geninfo_all_blocks=1 00:06:40.636 --rc geninfo_unexecuted_blocks=1 00:06:40.636 00:06:40.636 ' 00:06:40.636 14:07:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.636 --rc genhtml_branch_coverage=1 00:06:40.636 --rc genhtml_function_coverage=1 00:06:40.636 --rc genhtml_legend=1 00:06:40.636 --rc geninfo_all_blocks=1 00:06:40.636 --rc geninfo_unexecuted_blocks=1 00:06:40.636 00:06:40.636 ' 00:06:40.636 14:07:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.636 --rc genhtml_branch_coverage=1 00:06:40.636 --rc genhtml_function_coverage=1 00:06:40.636 --rc genhtml_legend=1 00:06:40.636 --rc geninfo_all_blocks=1 00:06:40.636 --rc geninfo_unexecuted_blocks=1 00:06:40.636 00:06:40.636 ' 00:06:40.636 14:07:42 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.636 --rc genhtml_branch_coverage=1 00:06:40.636 --rc genhtml_function_coverage=1 00:06:40.636 --rc genhtml_legend=1 00:06:40.636 --rc geninfo_all_blocks=1 00:06:40.636 --rc geninfo_unexecuted_blocks=1 00:06:40.636 00:06:40.636 ' 00:06:40.636 14:07:42 -- app/version.sh@17 -- # get_header_version major 00:06:40.636 14:07:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.636 14:07:42 -- app/version.sh@14 -- # tr -d '"' 00:06:40.636 14:07:42 -- app/version.sh@14 -- # cut -f2 00:06:40.636 14:07:42 -- app/version.sh@17 -- # major=24 00:06:40.636 14:07:42 -- app/version.sh@18 -- # get_header_version minor 00:06:40.636 14:07:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.636 14:07:42 -- app/version.sh@14 -- # cut -f2 00:06:40.636 14:07:42 -- app/version.sh@14 -- # tr -d '"' 00:06:40.636 14:07:42 -- app/version.sh@18 -- # minor=1 00:06:40.636 14:07:42 -- app/version.sh@19 -- # get_header_version patch 00:06:40.636 14:07:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.636 14:07:42 -- app/version.sh@14 -- # cut -f2 00:06:40.636 14:07:42 -- app/version.sh@14 -- # tr -d '"' 00:06:40.636 14:07:42 -- app/version.sh@19 -- # patch=1 00:06:40.636 14:07:42 -- app/version.sh@20 -- # get_header_version suffix 00:06:40.636 14:07:42 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:40.636 14:07:42 -- app/version.sh@14 -- # cut -f2 00:06:40.636 14:07:42 -- app/version.sh@14 -- # tr -d '"' 00:06:40.636 14:07:42 -- app/version.sh@20 -- # suffix=-pre 00:06:40.636 14:07:42 -- app/version.sh@22 -- # version=24.1 00:06:40.636 14:07:42 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:40.636 14:07:42 -- app/version.sh@25 -- # version=24.1.1 00:06:40.636 14:07:42 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:40.636 14:07:42 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:40.636 14:07:42 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.636 14:07:42 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:40.636 14:07:42 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:40.636 00:06:40.636 real 0m0.183s 00:06:40.636 user 0m0.122s 00:06:40.636 sys 0m0.087s 00:06:40.636 ************************************ 00:06:40.636 END TEST version 00:06:40.636 ************************************ 00:06:40.637 14:07:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.637 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.898 14:07:42 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:40.898 14:07:42 -- spdk/autotest.sh@191 -- # uname -s 00:06:40.898 14:07:42 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:06:40.898 14:07:42 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:40.898 14:07:42 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:40.898 14:07:42 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:06:40.898 14:07:42 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme 
/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:40.898 14:07:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.898 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.898 ************************************ 00:06:40.898 START TEST blockdev_nvme 00:06:40.898 ************************************ 00:06:40.898 14:07:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:40.898 * Looking for test storage... 00:06:40.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.898 14:07:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.898 14:07:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.898 14:07:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.898 14:07:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.898 14:07:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.898 14:07:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.898 14:07:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.898 14:07:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.898 14:07:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.898 14:07:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.898 14:07:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.898 14:07:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.898 14:07:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.898 14:07:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.898 14:07:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.898 14:07:42 -- scripts/common.sh@344 -- # : 1 00:06:40.898 14:07:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.898 14:07:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.898 14:07:42 -- scripts/common.sh@364 -- # decimal 1 00:06:40.898 14:07:42 -- scripts/common.sh@352 -- # local d=1 00:06:40.898 14:07:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.898 14:07:42 -- scripts/common.sh@354 -- # echo 1 00:06:40.898 14:07:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.898 14:07:42 -- scripts/common.sh@365 -- # decimal 2 00:06:40.898 14:07:42 -- scripts/common.sh@352 -- # local d=2 00:06:40.898 14:07:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.898 14:07:42 -- scripts/common.sh@354 -- # echo 2 00:06:40.898 14:07:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.898 14:07:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.898 14:07:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.898 14:07:42 -- scripts/common.sh@367 -- # return 0 00:06:40.898 14:07:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.898 --rc genhtml_branch_coverage=1 00:06:40.898 --rc genhtml_function_coverage=1 00:06:40.898 --rc genhtml_legend=1 00:06:40.898 --rc geninfo_all_blocks=1 00:06:40.898 --rc geninfo_unexecuted_blocks=1 00:06:40.898 00:06:40.898 ' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.898 --rc genhtml_branch_coverage=1 00:06:40.898 --rc genhtml_function_coverage=1 00:06:40.898 --rc genhtml_legend=1 00:06:40.898 --rc geninfo_all_blocks=1 00:06:40.898 --rc geninfo_unexecuted_blocks=1 00:06:40.898 00:06:40.898 ' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.898 --rc genhtml_branch_coverage=1 00:06:40.898 --rc genhtml_function_coverage=1 00:06:40.898 --rc genhtml_legend=1 00:06:40.898 --rc geninfo_all_blocks=1 00:06:40.898 --rc geninfo_unexecuted_blocks=1 00:06:40.898 00:06:40.898 ' 00:06:40.898 14:07:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.898 --rc genhtml_branch_coverage=1 00:06:40.898 --rc genhtml_function_coverage=1 00:06:40.898 --rc genhtml_legend=1 00:06:40.898 --rc geninfo_all_blocks=1 00:06:40.898 --rc geninfo_unexecuted_blocks=1 00:06:40.898 00:06:40.898 ' 00:06:40.898 14:07:42 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.898 14:07:42 -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.898 14:07:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:40.898 14:07:42 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.898 14:07:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:40.898 14:07:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:40.898 14:07:42 -- bdev/blockdev.sh@18 -- # : 00:06:40.898 14:07:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:06:40.898 14:07:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:06:40.898 14:07:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:06:40.898 14:07:42 -- bdev/blockdev.sh@672 -- # uname -s 00:06:40.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.898 14:07:42 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:06:40.898 14:07:42 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:06:40.898 14:07:42 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:06:40.898 14:07:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:06:40.898 14:07:42 -- bdev/blockdev.sh@682 -- # dek= 00:06:40.898 14:07:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:06:40.898 14:07:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:06:40.898 14:07:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:06:40.898 14:07:42 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:06:40.898 14:07:42 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:06:40.898 14:07:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:06:40.898 14:07:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60150 00:06:40.898 14:07:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.898 14:07:42 -- bdev/blockdev.sh@47 -- # waitforlisten 60150 00:06:40.898 14:07:42 -- common/autotest_common.sh@829 -- # '[' -z 60150 ']' 00:06:40.898 14:07:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.898 14:07:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.898 14:07:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.898 14:07:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.898 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.898 14:07:42 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:41.158 [2024-12-04 14:07:42.361860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.158 [2024-12-04 14:07:42.361997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60150 ] 00:06:41.158 [2024-12-04 14:07:42.511787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.416 [2024-12-04 14:07:42.666273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.416 [2024-12-04 14:07:42.666429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.984 14:07:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.984 14:07:43 -- common/autotest_common.sh@862 -- # return 0 00:06:41.984 14:07:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:06:41.984 14:07:43 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:06:41.984 14:07:43 -- bdev/blockdev.sh@79 -- # local json 00:06:41.984 14:07:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:06:41.984 14:07:43 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:41.984 14:07:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:06:41.984 14:07:43 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.984 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:06:42.243 14:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.243 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@738 -- # cat 00:06:42.243 14:07:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:06:42.243 14:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.243 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:06:42.243 14:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.243 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:42.243 14:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.243 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:06:42.243 14:07:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:06:42.243 14:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.243 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:07:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:06:42.243 14:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.243 14:07:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:06:42.243 14:07:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:06:42.244 14:07:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "68cf5053-fe56-46e4-ae62-2496941dcc17"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "68cf5053-fe56-46e4-ae62-2496941dcc17",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d75ecb4e-dea7-4adf-b918-9478e1d6219b"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d75ecb4e-dea7-4adf-b918-9478e1d6219b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c1153a4e-6206-42e9-a0f1-ac95e277de3c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1153a4e-6206-42e9-a0f1-ac95e277de3c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2f380c6c-81a4-431a-9c14-f7e3c8f38be8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f380c6c-81a4-431a-9c14-f7e3c8f38be8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 
1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ace136c5-6ce1-4edc-903b-59b125b825e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ace136c5-6ce1-4edc-903b-59b125b825e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5c7390dd-7cab-4b02-8bfd-edc7a5c16f8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c7390dd-7cab-4b02-8bfd-edc7a5c16f8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:42.244 14:07:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:06:42.244 14:07:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:06:42.244 14:07:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:06:42.244 14:07:43 -- bdev/blockdev.sh@752 -- # killprocess 60150 00:06:42.244 14:07:43 -- common/autotest_common.sh@936 -- # '[' -z 60150 ']' 00:06:42.244 14:07:43 -- common/autotest_common.sh@940 -- # kill -0 60150 00:06:42.244 14:07:43 -- common/autotest_common.sh@941 -- # uname 00:06:42.244 14:07:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.244 14:07:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60150 
00:06:42.244 killing process with pid 60150 00:06:42.244 14:07:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.244 14:07:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.244 14:07:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60150' 00:06:42.244 14:07:43 -- common/autotest_common.sh@955 -- # kill 60150 00:06:42.244 14:07:43 -- common/autotest_common.sh@960 -- # wait 60150 00:06:43.619 14:07:44 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:43.619 14:07:44 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:43.619 14:07:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.619 14:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.619 14:07:44 -- common/autotest_common.sh@10 -- # set +x 00:06:43.619 ************************************ 00:06:43.619 START TEST bdev_hello_world 00:06:43.619 ************************************ 00:06:43.619 14:07:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:43.619 [2024-12-04 14:07:44.876646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.619 [2024-12-04 14:07:44.876837] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:06:43.619 [2024-12-04 14:07:45.010829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.877 [2024-12-04 14:07:45.151417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.443 [2024-12-04 14:07:45.614286] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:44.443 [2024-12-04 14:07:45.614326] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:44.443 [2024-12-04 14:07:45.614340] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:44.443 [2024-12-04 14:07:45.616200] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:44.443 [2024-12-04 14:07:45.616531] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:44.443 [2024-12-04 14:07:45.616554] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:44.443 [2024-12-04 14:07:45.616753] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
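The hello_world stage above runs SPDK's hello_bdev example, and its NOTICE lines trace the whole flow: open the bdev named by -b, get an I/O channel, write the string, then read it back ("Read string from bdev : Hello World!"). Rerunning it by hand takes only the binary plus the bdev JSON config; a sketch with the paths from this workspace shortened to repo-relative form:

  sudo build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b Nvme0n1    # must name a bdev defined in the JSON config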
00:06:44.443 00:06:44.444 [2024-12-04 14:07:45.616769] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:45.012 00:06:45.012 real 0m1.403s 00:06:45.012 user 0m1.155s 00:06:45.012 sys 0m0.144s 00:06:45.012 14:07:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.012 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.012 ************************************ 00:06:45.012 END TEST bdev_hello_world 00:06:45.012 ************************************ 00:06:45.012 14:07:46 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:06:45.012 14:07:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:45.012 14:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.012 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.012 ************************************ 00:06:45.012 START TEST bdev_bounds 00:06:45.012 ************************************ 00:06:45.012 14:07:46 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:06:45.012 14:07:46 -- bdev/blockdev.sh@288 -- # bdevio_pid=60265 00:06:45.012 14:07:46 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.012 Process bdevio pid: 60265 00:06:45.012 14:07:46 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60265' 00:06:45.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.012 14:07:46 -- bdev/blockdev.sh@291 -- # waitforlisten 60265 00:06:45.012 14:07:46 -- common/autotest_common.sh@829 -- # '[' -z 60265 ']' 00:06:45.012 14:07:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.012 14:07:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.012 14:07:46 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:45.012 14:07:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.012 14:07:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.012 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:06:45.012 [2024-12-04 14:07:46.343492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
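bdev_bounds drives bdevio in its wait mode: -w makes the app initialize and then block on /var/tmp/spdk.sock until a perform_tests RPC arrives, which is why the harness waits for the socket before tests.py fires the trigger. A sketch of the same two-step pattern (flags copied from the command line above; -s is the app memory-size option, which the harness sets to 0 here):

  # step 1: start bdevio and leave it listening for the RPC trigger
  sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
  # step 2: from another shell, kick off the CUnit suites
  test/bdev/bdevio/tests.py perform_tests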
00:06:45.012 [2024-12-04 14:07:46.343601] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:06:45.271 [2024-12-04 14:07:46.492784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.271 [2024-12-04 14:07:46.637851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.271 [2024-12-04 14:07:46.638143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.271 [2024-12-04 14:07:46.638171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.839 14:07:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.839 14:07:47 -- common/autotest_common.sh@862 -- # return 0 00:06:45.839 14:07:47 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:45.839 I/O targets: 00:06:45.839 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:45.839 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:45.839 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.839 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.839 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.839 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:45.839 00:06:45.839 00:06:45.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.839 http://cunit.sourceforge.net/ 00:06:45.839 00:06:45.839 00:06:45.839 Suite: bdevio tests on: Nvme3n1 00:06:45.839 Test: blockdev write read block ...passed 00:06:45.839 Test: blockdev write zeroes read block ...passed 00:06:45.839 Test: blockdev write zeroes read no split ...passed 00:06:45.839 Test: blockdev write zeroes read split ...passed 00:06:46.098 Test: blockdev write zeroes read split partial ...passed 00:06:46.098 Test: blockdev reset ...[2024-12-04 14:07:47.316233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:06:46.098 [2024-12-04 14:07:47.318978] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
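The sizes in the I/O targets list are simply blocks times block_size: Nvme1n1 is 1310720 x 4096 B = 5,368,709,120 B = 5120 MiB, the three Nvme2 namespaces are 1048576 x 4096 B = 4096 MiB each, Nvme3n1 is 262144 x 4096 B = 1024 MiB, and Nvme0n1's odd 1548666 blocks work out to 6,343,335,936 B, reported as 6050 MiB.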
00:06:46.098 passed 00:06:46.098 Test: blockdev write read 8 blocks ...passed 00:06:46.098 Test: blockdev write read size > 128k ...passed 00:06:46.098 Test: blockdev write read invalid size ...passed 00:06:46.098 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.098 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.098 Test: blockdev write read max offset ...passed 00:06:46.098 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.098 Test: blockdev writev readv 8 blocks ...passed 00:06:46.098 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.098 Test: blockdev writev readv block ...passed 00:06:46.098 Test: blockdev writev readv size > 128k ...passed 00:06:46.098 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.098 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.326275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fc0e000 len:0x1000 00:06:46.098 [2024-12-04 14:07:47.326323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.098 passed 00:06:46.098 Test: blockdev nvme passthru rw ...passed 00:06:46.099 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.099 Test: blockdev nvme admin passthru ...[2024-12-04 14:07:47.326900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.099 [2024-12-04 14:07:47.326929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev copy ...passed 00:06:46.099 Suite: bdevio tests on: Nvme2n3 00:06:46.099 Test: blockdev write read block ...passed 00:06:46.099 Test: blockdev write zeroes read block ...passed 00:06:46.099 Test: blockdev write zeroes read no split ...passed 00:06:46.099 Test: blockdev write zeroes read split ...passed 00:06:46.099 Test: blockdev write zeroes read split partial ...passed 00:06:46.099 Test: blockdev reset ...[2024-12-04 14:07:47.384077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:46.099 [2024-12-04 14:07:47.386746] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
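The NOTICE lines threaded through "comparev and writev" and the passthru tests are expected output, not errors escaping the suite: those tests drive failure paths on purpose and assert that the completion status is reported correctly. Reading the (SCT/SC) pairs: (02/85) is Status Code Type 0x2, Media and Data Integrity Errors, with Status Code 0x85, Compare Failure, i.e. the intentional miscompare; (00/01) is the generic Invalid Command Opcode answer to the bogus command pushed through admin passthru; and dnr:1 marks the completion Do Not Retry.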
00:06:46.099 passed 00:06:46.099 Test: blockdev write read 8 blocks ...passed 00:06:46.099 Test: blockdev write read size > 128k ...passed 00:06:46.099 Test: blockdev write read invalid size ...passed 00:06:46.099 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.099 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.099 Test: blockdev write read max offset ...passed 00:06:46.099 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.099 Test: blockdev writev readv 8 blocks ...passed 00:06:46.099 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.099 Test: blockdev writev readv block ...passed 00:06:46.099 Test: blockdev writev readv size > 128k ...passed 00:06:46.099 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.099 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.394188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26fc0a000 len:0x1000 00:06:46.099 [2024-12-04 14:07:47.394226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev nvme passthru rw ...passed 00:06:46.099 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.099 Test: blockdev nvme admin passthru ...[2024-12-04 14:07:47.394771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.099 [2024-12-04 14:07:47.394797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev copy ...passed 00:06:46.099 Suite: bdevio tests on: Nvme2n2 00:06:46.099 Test: blockdev write read block ...passed 00:06:46.099 Test: blockdev write zeroes read block ...passed 00:06:46.099 Test: blockdev write zeroes read no split ...passed 00:06:46.099 Test: blockdev write zeroes read split ...passed 00:06:46.099 Test: blockdev write zeroes read split partial ...passed 00:06:46.099 Test: blockdev reset ...[2024-12-04 14:07:47.452347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:46.099 [2024-12-04 14:07:47.454971] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
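Worth noting across this run of suites: Nvme2n1, Nvme2n2 and Nvme2n3 are namespaces 1 through 3 of a single QEMU controller at 0000:00:08.0 (serial 12342, per the bdev dump earlier in the log), so each suite's "blockdev reset" is disconnecting and reconnecting the same shared controller rather than three independent devices.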
00:06:46.099 passed 00:06:46.099 Test: blockdev write read 8 blocks ...passed 00:06:46.099 Test: blockdev write read size > 128k ...passed 00:06:46.099 Test: blockdev write read invalid size ...passed 00:06:46.099 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.099 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.099 Test: blockdev write read max offset ...passed 00:06:46.099 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.099 Test: blockdev writev readv 8 blocks ...passed 00:06:46.099 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.099 Test: blockdev writev readv block ...passed 00:06:46.099 Test: blockdev writev readv size > 128k ...passed 00:06:46.099 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.099 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.461880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:06:46.099 Test: blockdev nvme passthru rw ...passed 00:06:46.099 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x27a006000 len:0x1000 00:06:46.099 [2024-12-04 14:07:47.462001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.099 [2024-12-04 14:07:47.462602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.099 [2024-12-04 14:07:47.462627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev nvme admin passthru ...passed 00:06:46.099 Test: blockdev copy ...passed 00:06:46.099 Suite: bdevio tests on: Nvme2n1 00:06:46.099 Test: blockdev write read block ...passed 00:06:46.099 Test: blockdev write zeroes read block ...passed 00:06:46.099 Test: blockdev write zeroes read no split ...passed 00:06:46.099 Test: blockdev write zeroes read split ...passed 00:06:46.099 Test: blockdev write zeroes read split partial ...passed 00:06:46.099 Test: blockdev reset ...[2024-12-04 14:07:47.530052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:06:46.099 [2024-12-04 14:07:47.532527] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:46.099 passed 00:06:46.099 Test: blockdev write read 8 blocks ...passed 00:06:46.099 Test: blockdev write read size > 128k ...passed 00:06:46.099 Test: blockdev write read invalid size ...passed 00:06:46.099 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.099 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.099 Test: blockdev write read max offset ...passed 00:06:46.099 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.099 Test: blockdev writev readv 8 blocks ...passed 00:06:46.099 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.099 Test: blockdev writev readv block ...passed 00:06:46.099 Test: blockdev writev readv size > 128k ...passed 00:06:46.099 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.099 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.539424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27a001000 len:0x1000 00:06:46.099 [2024-12-04 14:07:47.539457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev nvme passthru rw ...passed 00:06:46.099 Test: blockdev nvme passthru vendor specific ...[2024-12-04 14:07:47.540177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:46.099 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:46.099 [2024-12-04 14:07:47.540307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.099 passed 00:06:46.099 Test: blockdev copy ...passed 00:06:46.099 Suite: bdevio tests on: Nvme1n1 00:06:46.099 Test: blockdev write read block ...passed 00:06:46.099 Test: blockdev write zeroes read block ...passed 00:06:46.099 Test: blockdev write zeroes read no split ...passed 00:06:46.359 Test: blockdev write zeroes read split ...passed 00:06:46.359 Test: blockdev write zeroes read split partial ...passed 00:06:46.359 Test: blockdev reset ...[2024-12-04 14:07:47.604576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:06:46.359 [2024-12-04 14:07:47.607016] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:46.359 passed 00:06:46.359 Test: blockdev write read 8 blocks ...passed 00:06:46.359 Test: blockdev write read size > 128k ...passed 00:06:46.359 Test: blockdev write read invalid size ...passed 00:06:46.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.359 Test: blockdev write read max offset ...passed 00:06:46.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.359 Test: blockdev writev readv 8 blocks ...passed 00:06:46.359 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.359 Test: blockdev writev readv block ...passed 00:06:46.359 Test: blockdev writev readv size > 128k ...passed 00:06:46.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.359 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.613951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:46.359 Test: blockdev nvme passthru rw ...passed 00:06:46.359 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x26ae06000 len:0x1000 00:06:46.359 [2024-12-04 14:07:47.614068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.359 passed 00:06:46.359 Test: blockdev nvme admin passthru ...[2024-12-04 14:07:47.614677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.359 [2024-12-04 14:07:47.614705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.359 passed 00:06:46.359 Test: blockdev copy ...passed 00:06:46.359 Suite: bdevio tests on: Nvme0n1 00:06:46.359 Test: blockdev write read block ...passed 00:06:46.359 Test: blockdev write zeroes read block ...passed 00:06:46.359 Test: blockdev write zeroes read no split ...passed 00:06:46.359 Test: blockdev write zeroes read split ...passed 00:06:46.359 Test: blockdev write zeroes read split partial ...passed 00:06:46.359 Test: blockdev reset ...[2024-12-04 14:07:47.656516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:06:46.359 [2024-12-04 14:07:47.658824] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:46.359 passed 00:06:46.359 Test: blockdev write read 8 blocks ...passed 00:06:46.359 Test: blockdev write read size > 128k ...passed 00:06:46.359 Test: blockdev write read invalid size ...passed 00:06:46.359 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.359 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.359 Test: blockdev write read max offset ...passed 00:06:46.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.359 Test: blockdev writev readv 8 blocks ...passed 00:06:46.359 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.359 Test: blockdev writev readv block ...passed 00:06:46.359 Test: blockdev writev readv size > 128k ...passed 00:06:46.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.359 Test: blockdev comparev and writev ...[2024-12-04 14:07:47.665856] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:46.359 separate metadata which is not supported yet. 
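The skip for Nvme0n1 ("separate metadata which is not supported yet") means that namespace carries per-block metadata in a separate buffer rather than interleaved as extended LBAs, a layout bdevio's compare-and-write test cannot exercise. To confirm a namespace's metadata format outside the harness, SPDK's identify example is one option (transport address taken from the reset line above; flag usage assumed from current SPDK examples, so treat this as a sketch):

  sudo build/examples/identify -r 'trtype:PCIe traddr:0000:00:06.0'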
00:06:46.359 passed 00:06:46.359 Test: blockdev nvme passthru rw ...passed 00:06:46.359 Test: blockdev nvme passthru vendor specific ...[2024-12-04 14:07:47.666628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:46.359 [2024-12-04 14:07:47.666749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:06:46.359 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:06:46.359 passed 00:06:46.359 Test: blockdev copy ...passed 00:06:46.359 00:06:46.359 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.359 suites 6 6 n/a 0 0 00:06:46.359 tests 138 138 138 0 0 00:06:46.359 asserts 893 893 893 0 n/a 00:06:46.359 00:06:46.359 Elapsed time = 1.103 seconds 00:06:46.359 0 00:06:46.359 14:07:47 -- bdev/blockdev.sh@293 -- # killprocess 60265 00:06:46.359 14:07:47 -- common/autotest_common.sh@936 -- # '[' -z 60265 ']' 00:06:46.359 14:07:47 -- common/autotest_common.sh@940 -- # kill -0 60265 00:06:46.359 14:07:47 -- common/autotest_common.sh@941 -- # uname 00:06:46.359 14:07:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.359 14:07:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60265 00:06:46.359 killing process with pid 60265 00:06:46.359 14:07:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.359 14:07:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.359 14:07:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60265' 00:06:46.359 14:07:47 -- common/autotest_common.sh@955 -- # kill 60265 00:06:46.359 14:07:47 -- common/autotest_common.sh@960 -- # wait 60265 00:06:46.942 14:07:48 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:06:46.942 00:06:46.942 real 0m1.980s 00:06:46.942 user 0m4.869s 00:06:46.942 sys 0m0.265s 00:06:46.942 14:07:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.942 ************************************ 00:06:46.942 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.942 END TEST bdev_bounds 00:06:46.942 ************************************ 00:06:46.942 14:07:48 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.942 14:07:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:46.942 14:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.942 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.942 ************************************ 00:06:46.942 START TEST bdev_nbd 00:06:46.942 ************************************ 00:06:46.942 14:07:48 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.942 14:07:48 -- bdev/blockdev.sh@298 -- # uname -s 00:06:46.942 14:07:48 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:06:46.942 14:07:48 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.942 14:07:48 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.942 14:07:48 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.942 14:07:48 -- bdev/blockdev.sh@302 -- # local bdev_all 00:06:46.942 14:07:48 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:06:46.942 14:07:48 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:06:46.942 14:07:48 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:46.942 14:07:48 -- bdev/blockdev.sh@309 -- # local nbd_all 00:06:46.942 14:07:48 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:06:46.942 14:07:48 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.942 14:07:48 -- bdev/blockdev.sh@312 -- # local nbd_list 00:06:46.942 14:07:48 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.942 14:07:48 -- bdev/blockdev.sh@313 -- # local bdev_list 00:06:46.942 14:07:48 -- bdev/blockdev.sh@316 -- # nbd_pid=60319 00:06:46.942 14:07:48 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:46.942 14:07:48 -- bdev/blockdev.sh@318 -- # waitforlisten 60319 /var/tmp/spdk-nbd.sock 00:06:46.942 14:07:48 -- common/autotest_common.sh@829 -- # '[' -z 60319 ']' 00:06:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.942 14:07:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.942 14:07:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.943 14:07:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.943 14:07:48 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:46.943 14:07:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.943 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.943 [2024-12-04 14:07:48.374287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
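The bdev_nbd stage begins by checking /sys/module/nbd for the kernel module, then brings up the minimal bdev_svc app on a dedicated RPC socket so the NBD exports can be managed without touching the default /var/tmp/spdk.sock. Distilled from the xtrace above into a runnable sketch (paths repo-relative; modprobe is the assumed remedy if the /sys check fails):

  sudo modprobe nbd                       # only if /sys/module/nbd is absent
  sudo test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json          # bdev-only SPDK service
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1 /dev/nbd0    # export a bdev as a kernel block device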
00:06:46.943 [2024-12-04 14:07:48.374371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.211 [2024-12-04 14:07:48.512974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.211 [2024-12-04 14:07:48.654765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.778 14:07:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.778 14:07:49 -- common/autotest_common.sh@862 -- # return 0 00:06:47.778 14:07:49 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@24 -- # local i 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.778 14:07:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:48.036 14:07:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:48.036 14:07:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:48.036 14:07:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:48.036 14:07:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:48.036 14:07:49 -- common/autotest_common.sh@867 -- # local i 00:06:48.036 14:07:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.036 14:07:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.036 14:07:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:48.036 14:07:49 -- common/autotest_common.sh@871 -- # break 00:06:48.036 14:07:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.036 14:07:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.036 14:07:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.036 1+0 records in 00:06:48.036 1+0 records out 00:06:48.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353386 s, 11.6 MB/s 00:06:48.036 14:07:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.036 14:07:49 -- common/autotest_common.sh@884 -- # size=4096 00:06:48.036 14:07:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.036 14:07:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.036 14:07:49 -- common/autotest_common.sh@887 -- # return 0 00:06:48.036 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.036 14:07:49 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.036 14:07:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:48.294 14:07:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:48.294 14:07:49 -- common/autotest_common.sh@867 -- # local i 00:06:48.294 14:07:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.294 14:07:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.294 14:07:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:48.294 14:07:49 -- common/autotest_common.sh@871 -- # break 00:06:48.294 14:07:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.294 14:07:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.294 14:07:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.294 1+0 records in 00:06:48.294 1+0 records out 00:06:48.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327431 s, 12.5 MB/s 00:06:48.294 14:07:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.294 14:07:49 -- common/autotest_common.sh@884 -- # size=4096 00:06:48.294 14:07:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.294 14:07:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.294 14:07:49 -- common/autotest_common.sh@887 -- # return 0 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.294 14:07:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:48.552 14:07:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:48.552 14:07:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:48.553 14:07:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:48.553 14:07:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:06:48.553 14:07:49 -- common/autotest_common.sh@867 -- # local i 00:06:48.553 14:07:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.553 14:07:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.553 14:07:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:06:48.553 14:07:49 -- common/autotest_common.sh@871 -- # break 00:06:48.553 14:07:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.553 14:07:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.553 14:07:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.553 1+0 records in 00:06:48.553 1+0 records out 00:06:48.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394016 s, 10.4 MB/s 00:06:48.553 14:07:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.553 14:07:49 -- common/autotest_common.sh@884 -- # size=4096 00:06:48.553 14:07:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.553 14:07:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.553 14:07:49 -- common/autotest_common.sh@887 -- # return 0 
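Every waitfornbd in this phase is the same sanity probe: poll /proc/partitions for the device, read exactly one 4096-byte block with O_DIRECT, and confirm via stat that the output file is non-empty. The MB/s figures dd prints here are not benchmarks; a single-block direct read mostly measures syscall plus NBD round-trip overhead. The probe, with the test file path shortened for illustration:

  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /tmp/nbdtest    # expect 4096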
00:06:48.553 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.553 14:07:49 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.553 14:07:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:48.812 14:07:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:06:48.812 14:07:50 -- common/autotest_common.sh@867 -- # local i 00:06:48.812 14:07:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.812 14:07:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.812 14:07:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:06:48.812 14:07:50 -- common/autotest_common.sh@871 -- # break 00:06:48.812 14:07:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.812 14:07:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.812 14:07:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.812 1+0 records in 00:06:48.812 1+0 records out 00:06:48.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287673 s, 14.2 MB/s 00:06:48.812 14:07:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.812 14:07:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:48.812 14:07:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.812 14:07:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.812 14:07:50 -- common/autotest_common.sh@887 -- # return 0 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:48.812 14:07:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:49.071 14:07:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:06:49.071 14:07:50 -- common/autotest_common.sh@867 -- # local i 00:06:49.071 14:07:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:06:49.071 14:07:50 -- common/autotest_common.sh@871 -- # break 00:06:49.071 14:07:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.071 1+0 records in 00:06:49.071 1+0 records out 00:06:49.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402937 s, 10.2 MB/s 00:06:49.071 14:07:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.071 14:07:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:49.071 14:07:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.071 14:07:50 -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:06:49.071 14:07:50 -- common/autotest_common.sh@887 -- # return 0 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:49.071 14:07:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:06:49.071 14:07:50 -- common/autotest_common.sh@867 -- # local i 00:06:49.071 14:07:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:06:49.071 14:07:50 -- common/autotest_common.sh@871 -- # break 00:06:49.071 14:07:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.071 14:07:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.071 1+0 records in 00:06:49.071 1+0 records out 00:06:49.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296905 s, 13.8 MB/s 00:06:49.071 14:07:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.071 14:07:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:49.071 14:07:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.071 14:07:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.071 14:07:50 -- common/autotest_common.sh@887 -- # return 0 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:49.071 14:07:50 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.330 14:07:50 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd0", 00:06:49.331 "bdev_name": "Nvme0n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd1", 00:06:49.331 "bdev_name": "Nvme1n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd2", 00:06:49.331 "bdev_name": "Nvme2n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd3", 00:06:49.331 "bdev_name": "Nvme2n2" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd4", 00:06:49.331 "bdev_name": "Nvme2n3" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd5", 00:06:49.331 "bdev_name": "Nvme3n1" 00:06:49.331 } 00:06:49.331 ]' 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd0", 00:06:49.331 "bdev_name": "Nvme0n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd1", 00:06:49.331 "bdev_name": "Nvme1n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd2", 00:06:49.331 "bdev_name": "Nvme2n1" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd3", 00:06:49.331 "bdev_name": "Nvme2n2" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": 
"/dev/nbd4", 00:06:49.331 "bdev_name": "Nvme2n3" 00:06:49.331 }, 00:06:49.331 { 00:06:49.331 "nbd_device": "/dev/nbd5", 00:06:49.331 "bdev_name": "Nvme3n1" 00:06:49.331 } 00:06:49.331 ]' 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.331 14:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@41 -- # break 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.590 14:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@41 -- # break 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.849 14:07:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@41 -- # break 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:50.109 
14:07:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@41 -- # break 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.109 14:07:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:50.368 14:07:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:50.368 14:07:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:50.368 14:07:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@41 -- # break 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.369 14:07:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@41 -- # break 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.627 14:07:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@65 -- # true 00:06:50.886 14:07:52 -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@122 -- # count=0 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@127 -- # return 0 00:06:50.887 14:07:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.887 14:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:50.887 /dev/nbd0 00:06:51.144 14:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.144 14:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.144 14:07:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.144 14:07:52 -- common/autotest_common.sh@867 -- # local i 00:06:51.144 14:07:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.144 14:07:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.144 14:07:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.144 14:07:52 -- common/autotest_common.sh@871 -- # break 00:06:51.144 14:07:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.144 14:07:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.144 14:07:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.144 1+0 records in 00:06:51.144 1+0 records out 00:06:51.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671946 s, 6.1 MB/s 00:06:51.144 14:07:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.144 14:07:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.144 14:07:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.144 14:07:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.144 14:07:52 -- common/autotest_common.sh@887 -- # return 0 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:51.145 /dev/nbd1 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.145 14:07:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:51.145 14:07:52 -- common/autotest_common.sh@867 -- # local i 00:06:51.145 14:07:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.145 14:07:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.145 14:07:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:51.145 14:07:52 -- common/autotest_common.sh@871 -- # break 
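Earlier on this stretch the harness proved the teardown actually happened: nbd_get_disks returned '[]', the jq filter produced no device paths, grep -c counted 0 (the bare "true" in the xtrace is the || true guard absorbing grep's nonzero exit on no matches), and the '[' 0 -ne 0 ']' test fell through, so the data-verify phase is free to remap the six bdevs onto /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd13. The same verification condensed to one line (socket path from this run):

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd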
00:06:51.145 14:07:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.145 14:07:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.145 14:07:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.145 1+0 records in 00:06:51.145 1+0 records out 00:06:51.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317678 s, 12.9 MB/s 00:06:51.145 14:07:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.145 14:07:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.145 14:07:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.145 14:07:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.145 14:07:52 -- common/autotest_common.sh@887 -- # return 0 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.145 14:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:51.402 /dev/nbd10 00:06:51.402 14:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:51.402 14:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:51.402 14:07:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:06:51.402 14:07:52 -- common/autotest_common.sh@867 -- # local i 00:06:51.402 14:07:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.402 14:07:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.402 14:07:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:06:51.402 14:07:52 -- common/autotest_common.sh@871 -- # break 00:06:51.402 14:07:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.402 14:07:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.402 14:07:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.402 1+0 records in 00:06:51.402 1+0 records out 00:06:51.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319001 s, 12.8 MB/s 00:06:51.402 14:07:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.402 14:07:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.402 14:07:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.402 14:07:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.402 14:07:52 -- common/autotest_common.sh@887 -- # return 0 00:06:51.402 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.402 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.402 14:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:51.659 /dev/nbd11 00:06:51.659 14:07:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:51.659 14:07:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:51.659 14:07:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:06:51.659 14:07:52 -- common/autotest_common.sh@867 -- # local i 00:06:51.659 14:07:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.659 14:07:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.659 14:07:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:06:51.659 14:07:52 -- 
common/autotest_common.sh@871 -- # break 00:06:51.659 14:07:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.659 14:07:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.659 14:07:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.659 1+0 records in 00:06:51.659 1+0 records out 00:06:51.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035297 s, 11.6 MB/s 00:06:51.659 14:07:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.659 14:07:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.659 14:07:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.659 14:07:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.659 14:07:52 -- common/autotest_common.sh@887 -- # return 0 00:06:51.659 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.659 14:07:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.659 14:07:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:51.919 /dev/nbd12 00:06:51.919 14:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:51.919 14:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:51.919 14:07:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:06:51.919 14:07:53 -- common/autotest_common.sh@867 -- # local i 00:06:51.919 14:07:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.920 14:07:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.920 14:07:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:06:51.920 14:07:53 -- common/autotest_common.sh@871 -- # break 00:06:51.920 14:07:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.920 14:07:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.920 14:07:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.920 1+0 records in 00:06:51.920 1+0 records out 00:06:51.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303454 s, 13.5 MB/s 00:06:51.920 14:07:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.920 14:07:53 -- common/autotest_common.sh@884 -- # size=4096 00:06:51.920 14:07:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.920 14:07:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.920 14:07:53 -- common/autotest_common.sh@887 -- # return 0 00:06:51.920 14:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.920 14:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:51.920 14:07:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:52.178 /dev/nbd13 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:52.178 14:07:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:06:52.178 14:07:53 -- common/autotest_common.sh@867 -- # local i 00:06:52.178 14:07:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.178 14:07:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.178 14:07:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
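These per-device probes are driven by one loop that pairs the bdev list with the /dev/nbd* list and exports each pair through the nbd_start_disk RPC; that loop is what the (( i++ )) / (( i < 6 )) counters in the trace are advancing. A condensed sketch of the driver, reusing the probe helper sketched above, with paths and socket taken from the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    # Export bdev i on NBD node i, then block until the node is usable.
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done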
00:06:52.178 14:07:53 -- common/autotest_common.sh@871 -- # break 00:06:52.178 14:07:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.178 14:07:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.178 14:07:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.178 1+0 records in 00:06:52.178 1+0 records out 00:06:52.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378261 s, 10.8 MB/s 00:06:52.178 14:07:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.178 14:07:53 -- common/autotest_common.sh@884 -- # size=4096 00:06:52.178 14:07:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.178 14:07:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.178 14:07:53 -- common/autotest_common.sh@887 -- # return 0 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd0", 00:06:52.178 "bdev_name": "Nvme0n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd1", 00:06:52.178 "bdev_name": "Nvme1n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd10", 00:06:52.178 "bdev_name": "Nvme2n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd11", 00:06:52.178 "bdev_name": "Nvme2n2" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd12", 00:06:52.178 "bdev_name": "Nvme2n3" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd13", 00:06:52.178 "bdev_name": "Nvme3n1" 00:06:52.178 } 00:06:52.178 ]' 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd0", 00:06:52.178 "bdev_name": "Nvme0n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd1", 00:06:52.178 "bdev_name": "Nvme1n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd10", 00:06:52.178 "bdev_name": "Nvme2n1" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd11", 00:06:52.178 "bdev_name": "Nvme2n2" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd12", 00:06:52.178 "bdev_name": "Nvme2n3" 00:06:52.178 }, 00:06:52.178 { 00:06:52.178 "nbd_device": "/dev/nbd13", 00:06:52.178 "bdev_name": "Nvme3n1" 00:06:52.178 } 00:06:52.178 ]' 00:06:52.178 14:07:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.178 /dev/nbd1 00:06:52.178 /dev/nbd10 00:06:52.178 /dev/nbd11 00:06:52.178 /dev/nbd12 00:06:52.178 /dev/nbd13' 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.436 /dev/nbd1 00:06:52.436 /dev/nbd10 00:06:52.436 /dev/nbd11 00:06:52.436 /dev/nbd12 00:06:52.436 /dev/nbd13' 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@65 -- # count=6 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@66 -- # echo 6 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@95 -- # 
count=6 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:52.436 256+0 records in 00:06:52.436 256+0 records out 00:06:52.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059919 s, 175 MB/s 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.436 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.437 256+0 records in 00:06:52.437 256+0 records out 00:06:52.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0583321 s, 18.0 MB/s 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.437 256+0 records in 00:06:52.437 256+0 records out 00:06:52.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0529224 s, 19.8 MB/s 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:52.437 256+0 records in 00:06:52.437 256+0 records out 00:06:52.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0529705 s, 19.8 MB/s 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:52.437 256+0 records in 00:06:52.437 256+0 records out 00:06:52.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0592723 s, 17.7 MB/s 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.437 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:52.694 256+0 records in 00:06:52.694 256+0 records out 00:06:52.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0524141 s, 20.0 MB/s 00:06:52.694 14:07:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:52.694 256+0 records in 00:06:52.694 256+0 records out 00:06:52.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0533403 s, 19.7 MB/s 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.694 
14:07:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@51 -- # local i 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.694 14:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@41 -- # break 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.951 14:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.209 14:07:54 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@41 -- # break 00:06:53.209 14:07:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@41 -- # break 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.210 14:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:53.467 14:07:54 -- bdev/nbd_common.sh@41 -- # break 00:06:53.468 14:07:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.468 14:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.468 14:07:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@41 -- # break 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.724 14:07:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@41 -- # break 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.980 14:07:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@65 -- # true 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.236 14:07:55 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:54.236 malloc_lvol_verify 00:06:54.236 14:07:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:54.494 415ec015-2b32-4b14-bd4e-6cdfd7bb0a59 00:06:54.494 14:07:55 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:54.752 c04355e1-7c00-40ff-ac32-c5aedf5a3fb7 00:06:54.752 14:07:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:54.752 /dev/nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:06:55.011 mke2fs 1.47.0 (5-Feb-2023) 00:06:55.011 Discarding device blocks: 0/4096 done 00:06:55.011 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:55.011 00:06:55.011 Allocating group tables: 0/1 done 00:06:55.011 Writing inode tables: 0/1 done 00:06:55.011 Creating journal (1024 blocks): done 00:06:55.011 Writing superblocks and filesystem accounting information: 0/1 done 00:06:55.011 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@51 -- # local i 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@41 -- # break 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:06:55.011 14:07:56 -- bdev/nbd_common.sh@147 -- # return 0 00:06:55.011 14:07:56 -- bdev/blockdev.sh@324 -- # killprocess 60319 00:06:55.011 14:07:56 -- common/autotest_common.sh@936 -- # '[' -z 60319 ']' 00:06:55.011 14:07:56 -- common/autotest_common.sh@940 -- # kill -0 60319 00:06:55.011 14:07:56 -- common/autotest_common.sh@941 -- # uname 00:06:55.011 14:07:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:55.011 14:07:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60319 00:06:55.011 14:07:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:55.011 14:07:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:55.011 killing process with pid 60319 00:06:55.011 14:07:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60319' 00:06:55.011 14:07:56 -- common/autotest_common.sh@955 -- # kill 60319 00:06:55.011 14:07:56 -- common/autotest_common.sh@960 -- # wait 60319 00:06:55.945 14:07:57 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:06:55.945 00:06:55.945 real 0m8.814s 00:06:55.945 user 0m12.770s 00:06:55.945 sys 0m2.688s 00:06:55.945 14:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.945 ************************************ 00:06:55.945 END TEST bdev_nbd 00:06:55.945 ************************************ 00:06:55.945 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.945 14:07:57 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:06:55.945 14:07:57 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:06:55.945 skipping fio tests on NVMe due to multi-ns failures. 00:06:55.945 14:07:57 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:55.945 14:07:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:55.945 14:07:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:55.945 14:07:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:06:55.945 14:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.945 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:55.945 ************************************ 00:06:55.945 START TEST bdev_verify 00:06:55.945 ************************************ 00:06:55.945 14:07:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:55.945 [2024-12-04 14:07:57.256542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
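The bdev_nbd test that just ended centers on the data round-trip visible at 00:06:52 through 00:06:54: one shared 1 MiB random pattern is pushed through every exported NBD node with direct I/O, then each device is byte-compared against the source file. A condensed paraphrase of the write and verify phases of nbd_dd_data_verify:

verify_nbd_data() {
    local tmp=/tmp/nbdrandtest dev
    # 256 x 4 KiB of random data, written through every device...
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "$@"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # ...then read back and compared; cmp exits non-zero on any mismatch.
    for dev in "$@"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"
}
verify_nbd_data /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13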
00:06:55.945 [2024-12-04 14:07:57.256657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:06:55.945 [2024-12-04 14:07:57.401731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.204 [2024-12-04 14:07:57.546436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.204 [2024-12-04 14:07:57.546510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.777 Running I/O for 5 seconds... 00:07:02.072 00:07:02.072 Latency(us) 00:07:02.072 [2024-12-04T14:08:03.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x0 length 0xbd0bd 00:07:02.072 Nvme0n1 : 5.05 2320.90 9.07 0.00 0.00 54976.52 10233.70 60091.47 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:02.072 Nvme0n1 : 5.05 2362.58 9.23 0.00 0.00 54053.64 5469.74 61704.66 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x0 length 0xa0000 00:07:02.072 Nvme1n1 : 5.05 2320.25 9.06 0.00 0.00 54957.72 10687.41 58881.58 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0xa0000 length 0xa0000 00:07:02.072 Nvme1n1 : 5.06 2367.06 9.25 0.00 0.00 53870.96 4663.14 55655.19 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x0 length 0x80000 00:07:02.072 Nvme2n1 : 5.06 2326.17 9.09 0.00 0.00 54691.08 3150.77 54041.99 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x80000 length 0x80000 00:07:02.072 Nvme2n1 : 5.06 2365.60 9.24 0.00 0.00 53773.37 6604.01 54445.29 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x0 length 0x80000 00:07:02.072 Nvme2n2 : 5.06 2325.46 9.08 0.00 0.00 54661.97 3806.13 53638.70 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x80000 length 0x80000 00:07:02.072 Nvme2n2 : 5.06 2364.07 9.23 0.00 0.00 53722.72 8922.98 55655.19 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.072 Verification LBA range: start 0x0 length 0x80000 00:07:02.072 Nvme2n3 : 5.06 2324.06 9.08 0.00 0.00 54623.76 5898.24 54041.99 00:07:02.072 [2024-12-04T14:08:03.537Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.073 Verification LBA range: start 0x80000 length 0x80000 00:07:02.073 Nvme2n3 : 5.07 2362.60 9.23 0.00 0.00 53708.80 11040.30 56461.78 00:07:02.073 [2024-12-04T14:08:03.538Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:02.073 Verification LBA range: start 0x0 length 0x20000 00:07:02.073 Nvme3n1 : 5.06 
2322.60 9.07 0.00 0.00 54603.33 8166.79 52428.80 00:07:02.073 [2024-12-04T14:08:03.538Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:02.073 Verification LBA range: start 0x20000 length 0x20000 00:07:02.073 Nvme3n1 : 5.07 2362.04 9.23 0.00 0.00 53654.57 11090.71 56865.08 00:07:02.073 [2024-12-04T14:08:03.538Z] =================================================================================================================== 00:07:02.073 [2024-12-04T14:08:03.538Z] Total : 28123.39 109.86 0.00 0.00 54270.26 3150.77 61704.66 00:07:16.977 00:07:16.977 real 0m20.445s 00:07:16.977 user 0m39.604s 00:07:16.977 sys 0m0.327s 00:07:16.977 14:08:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.977 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:16.977 ************************************ 00:07:16.977 END TEST bdev_verify 00:07:16.977 ************************************ 00:07:16.977 14:08:17 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.977 14:08:17 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:07:16.977 14:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.977 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:16.977 ************************************ 00:07:16.977 START TEST bdev_verify_big_io 00:07:16.977 ************************************ 00:07:16.977 14:08:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.977 [2024-12-04 14:08:17.759569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.977 [2024-12-04 14:08:17.759676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60876 ] 00:07:16.977 [2024-12-04 14:08:17.907741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.977 [2024-12-04 14:08:18.083416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.977 [2024-12-04 14:08:18.083485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.543 Running I/O for 5 seconds... 
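The bdev_verify stage that finished at 00:07:16 is a single bdevperf invocation; the flags below are read from the run_test line, with meanings annotated as a hedged reading of the usual bdevperf options rather than authoritative documentation. The -C flag in particular would explain why the results table carries both a Core Mask 0x1 and a Core Mask 0x2 row for every bdev:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# -q 128: 128 outstanding I/Os; -o 4096: 4 KiB I/O size;
# -w verify: write a pattern, read it back, and compare;
# -t 5: five-second run; -m 0x3: reactors on cores 0 and 1;
# -C: let every core submit I/O to every bdev.
"$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3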
00:07:22.807 00:07:22.807 Latency(us) 00:07:22.807 [2024-12-04T14:08:24.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0xbd0b 00:07:22.807 Nvme0n1 : 5.33 294.07 18.38 0.00 0.00 424601.35 71383.83 787238.60 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:22.807 Nvme0n1 : 5.33 311.17 19.45 0.00 0.00 403074.51 48194.17 590428.95 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0xa000 00:07:22.807 Nvme1n1 : 5.37 299.90 18.74 0.00 0.00 411326.86 37910.06 716258.07 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0xa000 length 0xa000 00:07:22.807 Nvme1n1 : 5.33 311.09 19.44 0.00 0.00 398289.99 48597.46 542033.13 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0x8000 00:07:22.807 Nvme2n1 : 5.37 299.83 18.74 0.00 0.00 404385.86 38313.35 645277.54 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x8000 length 0x8000 00:07:22.807 Nvme2n1 : 5.36 318.62 19.91 0.00 0.00 387207.84 22786.36 490410.93 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0x8000 00:07:22.807 Nvme2n2 : 5.39 316.42 19.78 0.00 0.00 380157.81 8872.57 577523.40 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x8000 length 0x8000 00:07:22.807 Nvme2n2 : 5.36 318.53 19.91 0.00 0.00 382524.72 23391.31 442015.11 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0x8000 00:07:22.807 Nvme2n3 : 5.41 323.33 20.21 0.00 0.00 365751.82 9628.75 506542.87 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x8000 length 0x8000 00:07:22.807 Nvme2n3 : 5.36 325.92 20.37 0.00 0.00 370820.79 2445.00 392006.10 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x0 length 0x2000 00:07:22.807 Nvme3n1 : 5.45 381.66 23.85 0.00 0.00 305432.93 319.80 435562.34 00:07:22.807 [2024-12-04T14:08:24.272Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:22.807 Verification LBA range: start 0x2000 length 0x2000 00:07:22.807 Nvme3n1 : 5.37 332.91 20.81 0.00 0.00 358952.76 2621.44 356515.84 00:07:22.807 [2024-12-04T14:08:24.272Z] =================================================================================================================== 00:07:22.807 [2024-12-04T14:08:24.272Z] Total : 3833.47 239.59 0.00 0.00 380551.62 319.80 787238.60 00:07:24.703 00:07:24.703 real 0m8.453s 00:07:24.703 user 
0m15.900s 00:07:24.703 sys 0m0.232s 00:07:24.703 14:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.703 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.703 ************************************ 00:07:24.703 END TEST bdev_verify_big_io 00:07:24.703 ************************************ 00:07:24.962 14:08:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.962 14:08:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:24.962 14:08:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.962 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:24.962 ************************************ 00:07:24.962 START TEST bdev_write_zeroes 00:07:24.962 ************************************ 00:07:24.962 14:08:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.962 [2024-12-04 14:08:26.272306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.962 [2024-12-04 14:08:26.272387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60991 ] 00:07:24.962 [2024-12-04 14:08:26.412859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.220 [2024-12-04 14:08:26.550559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.787 Running I/O for 1 seconds... 00:07:26.725 00:07:26.725 Latency(us) 00:07:26.725 [2024-12-04T14:08:28.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme0n1 : 1.01 12076.96 47.18 0.00 0.00 10572.04 5469.74 22080.59 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme1n1 : 1.01 12061.97 47.12 0.00 0.00 10575.41 7108.14 18350.08 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme2n1 : 1.01 12048.34 47.06 0.00 0.00 10569.60 6956.90 18047.61 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme2n2 : 1.02 12034.79 47.01 0.00 0.00 10543.62 6755.25 17140.18 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme2n3 : 1.02 12075.51 47.17 0.00 0.00 10504.53 6553.60 16938.54 00:07:26.725 [2024-12-04T14:08:28.190Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:26.725 Nvme3n1 : 1.02 12061.95 47.12 0.00 0.00 10495.69 6553.60 17341.83 00:07:26.725 [2024-12-04T14:08:28.190Z] =================================================================================================================== 00:07:26.725 [2024-12-04T14:08:28.190Z] Total : 72359.51 282.65 0.00 0.00 10543.40 5469.74 22080.59 00:07:27.667 00:07:27.667 real 0m2.720s 00:07:27.667 user 0m2.441s 00:07:27.667 sys 0m0.161s 00:07:27.667 14:08:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 
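A quick consistency check on these tables: with -o 4096 the MiB/s column is just IOPS scaled by the I/O size, MiB/s = IOPS x 4096 / 2^20 = IOPS / 256. For the Nvme0n1 row of the write_zeroes run just above:

# 12076.96 IOPS at 4 KiB per I/O:
awk 'BEGIN { printf "%.2f\n", 12076.96 / 256 }'   # prints 47.18, matching the table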
00:07:27.667 ************************************ 00:07:27.667 END TEST bdev_write_zeroes 00:07:27.667 ************************************ 00:07:27.667 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.667 14:08:28 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.667 14:08:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:27.667 14:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.667 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:27.667 ************************************ 00:07:27.667 START TEST bdev_json_nonenclosed 00:07:27.667 ************************************ 00:07:27.668 14:08:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.668 [2024-12-04 14:08:29.078874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.668 [2024-12-04 14:08:29.079016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61038 ] 00:07:27.929 [2024-12-04 14:08:29.228372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.191 [2024-12-04 14:08:29.451105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.191 [2024-12-04 14:08:29.451324] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:28.191 [2024-12-04 14:08:29.451353] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.452 00:07:28.452 real 0m0.750s 00:07:28.452 user 0m0.522s 00:07:28.452 sys 0m0.119s 00:07:28.452 14:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.452 ************************************ 00:07:28.452 END TEST bdev_json_nonenclosed 00:07:28.452 ************************************ 00:07:28.452 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.452 14:08:29 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.452 14:08:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:28.452 14:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.452 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:28.452 ************************************ 00:07:28.452 START TEST bdev_json_nonarray 00:07:28.452 ************************************ 00:07:28.452 14:08:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.452 [2024-12-04 14:08:29.884051] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
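The "not enclosed in {}" error above is the expected outcome: spdk_subsystem_init_from_json_config rejects any configuration whose top level is not a single JSON object. A hypothetical pair of fixtures illustrating the boundary (the real nonenclosed.json in the repo may differ in detail):

# Rejected shape: the document is not enclosed in {...}.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Accepted shape: one object carrying the "subsystems" key.
cat > /tmp/enclosed.json <<'EOF'
{ "subsystems": [] }
EOF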
00:07:28.452 [2024-12-04 14:08:29.884201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:07:28.714 [2024-12-04 14:08:30.032870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.989 [2024-12-04 14:08:30.271035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.989 [2024-12-04 14:08:30.271265] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:28.989 [2024-12-04 14:08:30.271295] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.251 00:07:29.251 real 0m0.737s 00:07:29.251 user 0m0.501s 00:07:29.251 sys 0m0.129s 00:07:29.251 14:08:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.251 ************************************ 00:07:29.251 END TEST bdev_json_nonarray 00:07:29.251 ************************************ 00:07:29.251 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.251 14:08:30 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:07:29.251 14:08:30 -- bdev/blockdev.sh@809 -- # cleanup 00:07:29.251 14:08:30 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.251 14:08:30 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.251 14:08:30 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:07:29.251 14:08:30 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:07:29.251 00:07:29.251 real 0m48.488s 00:07:29.251 user 1m20.564s 00:07:29.251 sys 0m4.777s 00:07:29.251 14:08:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.251 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.251 ************************************ 00:07:29.251 END TEST blockdev_nvme 00:07:29.251 ************************************ 00:07:29.251 14:08:30 -- spdk/autotest.sh@206 -- # uname -s 00:07:29.251 14:08:30 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:07:29.251 14:08:30 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:29.251 14:08:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.251 14:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.251 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.251 ************************************ 00:07:29.251 START TEST blockdev_nvme_gpt 00:07:29.251 ************************************ 00:07:29.251 14:08:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:29.514 * Looking for test storage... 
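The companion nonarray test that closed out blockdev_nvme exercises the next validation step: the file parses as an object, but "subsystems" maps to something other than an array, producing the "'subsystems' should be an array" error seen above. A hypothetical fixture of that shape (the real nonarray.json may differ):

# Parses fine, but "subsystems" is an object rather than an array.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF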
00:07:29.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:29.514 14:08:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:29.514 14:08:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:29.514 14:08:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.514 14:08:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.514 14:08:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.514 14:08:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.514 14:08:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.514 14:08:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.514 14:08:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.514 14:08:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.514 14:08:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.514 14:08:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.514 14:08:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.514 14:08:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.514 14:08:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.514 14:08:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.514 14:08:30 -- scripts/common.sh@344 -- # : 1 00:07:29.514 14:08:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.514 14:08:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.514 14:08:30 -- scripts/common.sh@364 -- # decimal 1 00:07:29.514 14:08:30 -- scripts/common.sh@352 -- # local d=1 00:07:29.514 14:08:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.514 14:08:30 -- scripts/common.sh@354 -- # echo 1 00:07:29.514 14:08:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.514 14:08:30 -- scripts/common.sh@365 -- # decimal 2 00:07:29.514 14:08:30 -- scripts/common.sh@352 -- # local d=2 00:07:29.514 14:08:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.514 14:08:30 -- scripts/common.sh@354 -- # echo 2 00:07:29.514 14:08:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.514 14:08:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.514 14:08:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.514 14:08:30 -- scripts/common.sh@367 -- # return 0 00:07:29.514 14:08:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.514 14:08:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.514 --rc genhtml_branch_coverage=1 00:07:29.514 --rc genhtml_function_coverage=1 00:07:29.514 --rc genhtml_legend=1 00:07:29.514 --rc geninfo_all_blocks=1 00:07:29.514 --rc geninfo_unexecuted_blocks=1 00:07:29.514 00:07:29.514 ' 00:07:29.514 14:08:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.514 --rc genhtml_branch_coverage=1 00:07:29.514 --rc genhtml_function_coverage=1 00:07:29.514 --rc genhtml_legend=1 00:07:29.514 --rc geninfo_all_blocks=1 00:07:29.514 --rc geninfo_unexecuted_blocks=1 00:07:29.514 00:07:29.514 ' 00:07:29.514 14:08:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.514 --rc genhtml_branch_coverage=1 00:07:29.514 --rc genhtml_function_coverage=1 00:07:29.514 --rc genhtml_legend=1 00:07:29.514 --rc geninfo_all_blocks=1 00:07:29.514 --rc geninfo_unexecuted_blocks=1 00:07:29.514 00:07:29.514 ' 00:07:29.514 14:08:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.514 --rc genhtml_branch_coverage=1 00:07:29.514 --rc genhtml_function_coverage=1 00:07:29.514 --rc genhtml_legend=1 00:07:29.514 --rc geninfo_all_blocks=1 00:07:29.514 --rc geninfo_unexecuted_blocks=1 00:07:29.514 00:07:29.514 ' 00:07:29.514 14:08:30 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:29.514 14:08:30 -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.514 14:08:30 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:29.514 14:08:30 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.514 14:08:30 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:29.514 14:08:30 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:29.514 14:08:30 -- bdev/blockdev.sh@18 -- # : 00:07:29.514 14:08:30 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:29.514 14:08:30 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:29.514 14:08:30 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:29.514 14:08:30 -- bdev/blockdev.sh@672 -- # uname -s 00:07:29.514 14:08:30 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:29.514 14:08:30 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:29.514 14:08:30 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:07:29.514 14:08:30 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:29.514 14:08:30 -- bdev/blockdev.sh@682 -- # dek= 00:07:29.514 14:08:30 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:29.514 14:08:30 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:29.514 14:08:30 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:29.514 14:08:30 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:07:29.514 14:08:30 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:07:29.514 14:08:30 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:29.514 14:08:30 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61147 00:07:29.514 14:08:30 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:29.514 14:08:30 -- bdev/blockdev.sh@47 -- # waitforlisten 61147 00:07:29.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.514 14:08:30 -- common/autotest_common.sh@829 -- # '[' -z 61147 ']' 00:07:29.514 14:08:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.514 14:08:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.514 14:08:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.514 14:08:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.514 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:29.514 14:08:30 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:29.514 [2024-12-04 14:08:30.887895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
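The lcov probe at the top of blockdev_nvme_gpt gates coverage flags on lcov --version by comparing version strings field by field, as the cmp_versions walk through scripts/common.sh shows: split on dots, then a most-significant-field-first numeric compare. A condensed paraphrase (the real helper also splits on '-' and ':' and handles other comparison operators):

version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local v
    for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "old lcov: pass the --rc branch/function coverage options"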
00:07:29.514 [2024-12-04 14:08:30.888012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61147 ] 00:07:29.775 [2024-12-04 14:08:31.033977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.775 [2024-12-04 14:08:31.212659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.775 [2024-12-04 14:08:31.212874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.160 14:08:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.160 14:08:32 -- common/autotest_common.sh@862 -- # return 0 00:07:31.160 14:08:32 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:31.160 14:08:32 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:07:31.160 14:08:32 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.682 Waiting for block devices as requested 00:07:31.682 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.682 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.682 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.943 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.239 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:07:37.239 14:08:38 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:07:37.239 14:08:38 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:07:37.239 14:08:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:07:37.239 14:08:38 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:37.239 14:08:38 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:07:37.239 14:08:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:07:37.239 14:08:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:37.239 14:08:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:07:37.239 14:08:38 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:07:37.239 14:08:38 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:07:37.239 14:08:38 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:07:37.239 14:08:38 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:37.239 14:08:38 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:07:37.239 14:08:38 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:07:37.239 14:08:38 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:07:37.239 14:08:38 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:07:37.239 BYT; 00:07:37.239 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:37.239 14:08:38 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:07:37.239 BYT; 00:07:37.239 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:37.239 14:08:38 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:07:37.239 14:08:38 -- bdev/blockdev.sh@114 -- # break 00:07:37.239 14:08:38 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:07:37.239 14:08:38 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:37.239 14:08:38 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:37.239 14:08:38 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:37.239 14:08:38 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:07:37.239 14:08:38 -- scripts/common.sh@410 -- # local spdk_guid 00:07:37.239 14:08:38 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.239 14:08:38 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.239 14:08:38 -- scripts/common.sh@415 -- # IFS='()' 00:07:37.239 14:08:38 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:07:37.240 14:08:38 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.240 14:08:38 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:37.240 14:08:38 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.240 14:08:38 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.240 14:08:38 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.240 14:08:38 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:07:37.240 14:08:38 -- scripts/common.sh@422 -- # local spdk_guid 00:07:37.240 14:08:38 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.240 14:08:38 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.240 14:08:38 -- scripts/common.sh@427 -- # IFS='()' 00:07:37.240 14:08:38 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:07:37.240 14:08:38 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.240 14:08:38 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:37.240 14:08:38 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.240 14:08:38 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.240 14:08:38 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.240 14:08:38 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:07:38.175 The operation has completed successfully. 00:07:38.175 14:08:39 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:07:39.110 The operation has completed successfully. 
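For reference, the GPT preparation traced above is reproducible by hand. A minimal sketch, reconstructed from the xtrace: the GUID-extraction idiom, the parted and sgdisk invocations, and every GUID are taken verbatim from the trace; only the standalone-script framing is illustrative. Assumes /dev/nvme2n1 is a disposable test namespace whose contents do not matter.

#!/usr/bin/env bash
# Pull the SPDK partition-type GUID out of gpt.h, as scripts/common.sh does above.
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
spdk_guid=${spdk_guid//0x/}    # -> "6527994e-2c5a-4eec-9613-8f5944074e8b"

# Two half-disk partitions, then stamp SPDK's type GUIDs plus fixed unique GUIDs
# so later stages can locate the partitions deterministically.
parted -s /dev/nvme2n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1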
00:07:39.110 14:08:40 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.935 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.935 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.935 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.935 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.935 14:08:41 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:07:39.935 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.935 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:39.935 [] 00:07:39.935 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.935 14:08:41 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:07:39.935 14:08:41 -- bdev/blockdev.sh@79 -- # local json 00:07:39.935 14:08:41 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:39.935 14:08:41 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.935 14:08:41 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:39.935 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.935 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.195 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.195 14:08:41 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:40.195 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.195 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.456 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.457 14:08:41 -- bdev/blockdev.sh@738 -- # cat 00:07:40.457 14:08:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.457 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.457 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.457 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.457 14:08:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.457 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.457 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.457 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.457 14:08:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.457 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.457 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.457 14:08:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.457 14:08:41 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:40.457 14:08:41 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:40.457 14:08:41 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.457 14:08:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.457 14:08:41 -- common/autotest_common.sh@10 -- # set +x 00:07:40.457 14:08:41 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.457 14:08:41 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:40.457 14:08:41 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:40.457 14:08:41 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e0e78e8d-9904-47b0-a6fb-335fae59d69d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e0e78e8d-9904-47b0-a6fb-335fae59d69d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "bb3d494d-48e7-4245-ab2e-9dc985de5d15"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb3d494d-48e7-4245-ab2e-9dc985de5d15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8779781d-e485-41ac-bc8c-c35319a8f068"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8779781d-e485-41ac-bc8c-c35319a8f068",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "185fb1b1-168e-4430-8166-834853c05d0e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "185fb1b1-168e-4430-8166-834853c05d0e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "da4e0ba2-bcfa-4f20-8e04-c27dbd549cbb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "da4e0ba2-bcfa-4f20-8e04-c27dbd549cbb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:40.457 14:08:41 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.457 14:08:41 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:07:40.457 14:08:41 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:40.457 14:08:41 -- bdev/blockdev.sh@752 -- # killprocess 61147 00:07:40.457 14:08:41 -- common/autotest_common.sh@936 -- # '[' -z 61147 ']' 00:07:40.457 14:08:41 -- common/autotest_common.sh@940 -- # kill -0 61147 00:07:40.457 14:08:41 -- common/autotest_common.sh@941 -- # uname 00:07:40.457 14:08:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.457 14:08:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61147 00:07:40.457 14:08:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.457 killing process with pid 61147 00:07:40.457 14:08:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.457 14:08:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61147' 00:07:40.457 14:08:41 -- common/autotest_common.sh@955 -- # kill 61147 00:07:40.457 14:08:41 -- common/autotest_common.sh@960 -- # wait 61147 00:07:41.841 14:08:43 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.841 14:08:43 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:07:41.841 14:08:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:41.841 14:08:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.841 14:08:43 -- common/autotest_common.sh@10 -- # set +x 00:07:41.841 ************************************ 00:07:41.841 START TEST bdev_hello_world 00:07:41.841 ************************************ 00:07:41.841 14:08:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:07:41.841 [2024-12-04 14:08:43.230008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.841 [2024-12-04 14:08:43.230131] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61800 ] 00:07:42.100 [2024-12-04 14:08:43.375434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.100 [2024-12-04 14:08:43.513451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.666 [2024-12-04 14:08:43.980121] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:42.666 [2024-12-04 14:08:43.980157] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:07:42.666 [2024-12-04 14:08:43.980171] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:42.666 [2024-12-04 14:08:43.982065] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:42.666 [2024-12-04 14:08:43.982535] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:42.666 [2024-12-04 14:08:43.982560] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:42.666 [2024-12-04 14:08:43.982715] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:42.666 00:07:42.666 [2024-12-04 14:08:43.982744] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:43.233 00:07:43.233 real 0m1.427s 00:07:43.233 user 0m1.161s 00:07:43.233 sys 0m0.160s 00:07:43.233 ************************************ 00:07:43.233 END TEST bdev_hello_world 00:07:43.233 ************************************ 00:07:43.233 14:08:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.233 14:08:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 14:08:44 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:43.233 14:08:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.233 14:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.233 14:08:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 ************************************ 00:07:43.233 START TEST bdev_bounds 00:07:43.233 ************************************ 00:07:43.233 14:08:44 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:43.233 14:08:44 -- bdev/blockdev.sh@288 -- # bdevio_pid=61837 00:07:43.233 Process bdevio pid: 61837 00:07:43.233 14:08:44 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:43.233 14:08:44 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61837' 00:07:43.233 14:08:44 -- bdev/blockdev.sh@291 -- # waitforlisten 61837 00:07:43.233 14:08:44 -- common/autotest_common.sh@829 -- # '[' -z 61837 ']' 00:07:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.233 14:08:44 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:43.233 14:08:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.233 14:08:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.233 14:08:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
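The bdev_bounds pattern that starts here is generic: launch bdevio with -w so it initializes and then blocks waiting for an RPC, wait for the socket to come up, and only then fire the registered CUnit suites through tests.py. A minimal sketch of that cycle; the bdevio/tests.py paths and flags are verbatim from the trace, while /tmp/bdev.json, the "subsystems" wrapper layout, the fixed sleep (the harness uses waitforlisten instead), and the explicit kill are reconstructions.

# Same bdev config the trace loaded earlier via load_subsystem_config,
# wrapped the way a --json config file is laid out (assumed layout):
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:07.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:08.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:09.0" } }
      ]
    }
  ]
}
EOF

# -w: initialize, then wait for the RPC that kicks off the tests; -w -s 0 as in the trace.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &
bdevio_pid=$!
sleep 1   # assumed stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"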
00:07:43.233 14:08:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.233 14:08:44 -- common/autotest_common.sh@10 -- # set +x 00:07:43.492 [2024-12-04 14:08:44.717596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.492 [2024-12-04 14:08:44.717682] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61837 ] 00:07:43.492 [2024-12-04 14:08:44.860409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.767 [2024-12-04 14:08:45.001659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.767 [2024-12-04 14:08:45.001830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.768 [2024-12-04 14:08:45.001929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.338 14:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.338 14:08:45 -- common/autotest_common.sh@862 -- # return 0 00:07:44.338 14:08:45 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:44.338 I/O targets: 00:07:44.338 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:07:44.338 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:07:44.338 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:44.338 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.338 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.338 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:44.338 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:44.338 00:07:44.338 00:07:44.338 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.338 http://cunit.sourceforge.net/ 00:07:44.338 00:07:44.338 00:07:44.338 Suite: bdevio tests on: Nvme3n1 00:07:44.338 Test: blockdev write read block ...passed 00:07:44.338 Test: blockdev write zeroes read block ...passed 00:07:44.338 Test: blockdev write zeroes read no split ...passed 00:07:44.338 Test: blockdev write zeroes read split ...passed 00:07:44.338 Test: blockdev write zeroes read split partial ...passed 00:07:44.338 Test: blockdev reset ...[2024-12-04 14:08:45.677744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:44.338 [2024-12-04 14:08:45.680383] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.338 passed 00:07:44.338 Test: blockdev write read 8 blocks ...passed 00:07:44.338 Test: blockdev write read size > 128k ...passed 00:07:44.338 Test: blockdev write read invalid size ...passed 00:07:44.338 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.338 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.338 Test: blockdev write read max offset ...passed 00:07:44.338 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.338 Test: blockdev writev readv 8 blocks ...passed 00:07:44.338 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.338 Test: blockdev writev readv block ...passed 00:07:44.338 Test: blockdev writev readv size > 128k ...passed 00:07:44.338 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.338 Test: blockdev comparev and writev ...[2024-12-04 14:08:45.688450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27740a000 len:0x1000 00:07:44.338 [2024-12-04 14:08:45.688594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.338 passed 00:07:44.338 Test: blockdev nvme passthru rw ...passed 00:07:44.338 Test: blockdev nvme passthru vendor specific ...[2024-12-04 14:08:45.689368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.338 [2024-12-04 14:08:45.689468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:44.338 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:44.338 passed 00:07:44.338 Test: blockdev copy ...passed 00:07:44.338 Suite: bdevio tests on: Nvme2n3 00:07:44.338 Test: blockdev write read block ...passed 00:07:44.338 Test: blockdev write zeroes read block ...passed 00:07:44.338 Test: blockdev write zeroes read no split ...passed 00:07:44.338 Test: blockdev write zeroes read split ...passed 00:07:44.338 Test: blockdev write zeroes read split partial ...passed 00:07:44.338 Test: blockdev reset ...[2024-12-04 14:08:45.744501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:44.338 [2024-12-04 14:08:45.747159] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.338 passed 00:07:44.338 Test: blockdev write read 8 blocks ...passed 00:07:44.338 Test: blockdev write read size > 128k ...passed 00:07:44.338 Test: blockdev write read invalid size ...passed 00:07:44.338 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.338 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.338 Test: blockdev write read max offset ...passed 00:07:44.338 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.338 Test: blockdev writev readv 8 blocks ...passed 00:07:44.338 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.338 Test: blockdev writev readv block ...passed 00:07:44.338 Test: blockdev writev readv size > 128k ...passed 00:07:44.338 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.338 Test: blockdev comparev and writev ...[2024-12-04 14:08:45.754877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26cf04000 len:0x1000 00:07:44.338 [2024-12-04 14:08:45.754997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.338 passed 00:07:44.338 Test: blockdev nvme passthru rw ...passed 00:07:44.338 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.338 Test: blockdev nvme admin passthru ...[2024-12-04 14:08:45.755693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.339 [2024-12-04 14:08:45.755723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.339 passed 00:07:44.339 Test: blockdev copy ...passed 00:07:44.339 Suite: bdevio tests on: Nvme2n2 00:07:44.339 Test: blockdev write read block ...passed 00:07:44.339 Test: blockdev write zeroes read block ...passed 00:07:44.339 Test: blockdev write zeroes read no split ...passed 00:07:44.339 Test: blockdev write zeroes read split ...passed 00:07:44.597 Test: blockdev write zeroes read split partial ...passed 00:07:44.597 Test: blockdev reset ...[2024-12-04 14:08:45.812812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:44.597 [2024-12-04 14:08:45.815298] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.597 passed 00:07:44.597 Test: blockdev write read 8 blocks ...passed 00:07:44.597 Test: blockdev write read size > 128k ...passed 00:07:44.597 Test: blockdev write read invalid size ...passed 00:07:44.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.597 Test: blockdev write read max offset ...passed 00:07:44.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.597 Test: blockdev writev readv 8 blocks ...passed 00:07:44.597 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.597 Test: blockdev writev readv block ...passed 00:07:44.597 Test: blockdev writev readv size > 128k ...passed 00:07:44.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.597 Test: blockdev comparev and writev ...[2024-12-04 14:08:45.822380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26cf04000 len:0x1000 00:07:44.597 [2024-12-04 14:08:45.822415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Test: blockdev nvme passthru rw ...passed 00:07:44.597 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.597 Test: blockdev nvme admin passthru ...[2024-12-04 14:08:45.823120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.597 [2024-12-04 14:08:45.823148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Test: blockdev copy ...passed 00:07:44.597 Suite: bdevio tests on: Nvme2n1 00:07:44.597 Test: blockdev write read block ...passed 00:07:44.597 Test: blockdev write zeroes read block ...passed 00:07:44.597 Test: blockdev write zeroes read no split ...passed 00:07:44.597 Test: blockdev write zeroes read split ...passed 00:07:44.597 Test: blockdev write zeroes read split partial ...passed 00:07:44.597 Test: blockdev reset ...[2024-12-04 14:08:45.878897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:44.597 passed 00:07:44.597 Test: blockdev write read 8 blocks ...[2024-12-04 14:08:45.881373] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.597 passed 00:07:44.597 Test: blockdev write read size > 128k ...passed 00:07:44.597 Test: blockdev write read invalid size ...passed 00:07:44.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.597 Test: blockdev write read max offset ...passed 00:07:44.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.597 Test: blockdev writev readv 8 blocks ...passed 00:07:44.597 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.597 Test: blockdev writev readv block ...passed 00:07:44.597 Test: blockdev writev readv size > 128k ...passed 00:07:44.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.597 Test: blockdev comparev and writev ...[2024-12-04 14:08:45.887854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28103c000 len:0x1000 00:07:44.597 [2024-12-04 14:08:45.887891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Test: blockdev nvme passthru rw ...passed 00:07:44.597 Test: blockdev nvme passthru vendor specific ...[2024-12-04 14:08:45.888481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:07:44.597 Test: blockdev nvme admin passthru ...passed 00:07:44.597 Test: blockdev copy ...RP2 0x0 00:07:44.597 [2024-12-04 14:08:45.888569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Suite: bdevio tests on: Nvme1n1 00:07:44.597 Test: blockdev write read block ...passed 00:07:44.597 Test: blockdev write zeroes read block ...passed 00:07:44.597 Test: blockdev write zeroes read no split ...passed 00:07:44.597 Test: blockdev write zeroes read split ...passed 00:07:44.597 Test: blockdev write zeroes read split partial ...passed 00:07:44.597 Test: blockdev reset ...[2024-12-04 14:08:45.931473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:44.597 [2024-12-04 14:08:45.933701] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.597 passed 00:07:44.597 Test: blockdev write read 8 blocks ...passed 00:07:44.597 Test: blockdev write read size > 128k ...passed 00:07:44.597 Test: blockdev write read invalid size ...passed 00:07:44.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.597 Test: blockdev write read max offset ...passed 00:07:44.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.597 Test: blockdev writev readv 8 blocks ...passed 00:07:44.597 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.597 Test: blockdev writev readv block ...passed 00:07:44.597 Test: blockdev writev readv size > 128k ...passed 00:07:44.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.597 Test: blockdev comparev and writev ...[2024-12-04 14:08:45.941296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x281038000 len:0x1000 00:07:44.597 [2024-12-04 14:08:45.941409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Test: blockdev nvme passthru rw ...passed 00:07:44.597 Test: blockdev nvme passthru vendor specific ...[2024-12-04 14:08:45.942253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:44.597 [2024-12-04 14:08:45.942353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:44.597 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:44.597 passed 00:07:44.597 Test: blockdev copy ...passed 00:07:44.597 Suite: bdevio tests on: Nvme0n1p2 00:07:44.597 Test: blockdev write read block ...passed 00:07:44.597 Test: blockdev write zeroes read block ...passed 00:07:44.597 Test: blockdev write zeroes read no split ...passed 00:07:44.597 Test: blockdev write zeroes read split ...passed 00:07:44.597 Test: blockdev write zeroes read split partial ...passed 00:07:44.597 Test: blockdev reset ...[2024-12-04 14:08:45.997927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:44.597 [2024-12-04 14:08:46.000240] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:44.597 passed 00:07:44.597 Test: blockdev write read 8 blocks ...passed 00:07:44.597 Test: blockdev write read size > 128k ...passed 00:07:44.597 Test: blockdev write read invalid size ...passed 00:07:44.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.597 Test: blockdev write read max offset ...passed 00:07:44.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.597 Test: blockdev writev readv 8 blocks ...passed 00:07:44.597 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.597 Test: blockdev writev readv block ...passed 00:07:44.597 Test: blockdev writev readv size > 128k ...passed 00:07:44.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.597 Test: blockdev comparev and writev ...passed 00:07:44.597 Test: blockdev nvme passthru rw ...passed[2024-12-04 14:08:46.007571] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:07:44.597 separate metadata which is not supported yet. 00:07:44.597 00:07:44.597 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.597 Test: blockdev nvme admin passthru ...passed 00:07:44.597 Test: blockdev copy ...passed 00:07:44.597 Suite: bdevio tests on: Nvme0n1p1 00:07:44.597 Test: blockdev write read block ...passed 00:07:44.597 Test: blockdev write zeroes read block ...passed 00:07:44.597 Test: blockdev write zeroes read no split ...passed 00:07:44.597 Test: blockdev write zeroes read split ...passed 00:07:44.597 Test: blockdev write zeroes read split partial ...passed 00:07:44.597 Test: blockdev reset ...[2024-12-04 14:08:46.050741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:44.597 [2024-12-04 14:08:46.053054] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:44.597 passed 00:07:44.597 Test: blockdev write read 8 blocks ...passed 00:07:44.597 Test: blockdev write read size > 128k ...passed 00:07:44.597 Test: blockdev write read invalid size ...passed 00:07:44.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:44.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:44.597 Test: blockdev write read max offset ...passed 00:07:44.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:44.597 Test: blockdev writev readv 8 blocks ...passed 00:07:44.597 Test: blockdev writev readv 30 x 1block ...passed 00:07:44.597 Test: blockdev writev readv block ...passed 00:07:44.597 Test: blockdev writev readv size > 128k ...passed 00:07:44.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:44.598 Test: blockdev comparev and writev ...passed 00:07:44.598 Test: blockdev nvme passthru rw ...passed 00:07:44.598 Test: blockdev nvme passthru vendor specific ...passed 00:07:44.598 Test: blockdev nvme admin passthru ...passed 00:07:44.598 Test: blockdev copy ...[2024-12-04 14:08:46.060827] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:07:44.598 separate metadata which is not supported yet. 
00:07:44.856 passed 00:07:44.856 00:07:44.856 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.856 suites 7 7 n/a 0 0 00:07:44.856 tests 161 161 161 0 0 00:07:44.856 asserts 1006 1006 1006 0 n/a 00:07:44.856 00:07:44.856 Elapsed time = 1.160 seconds 00:07:44.856 0 00:07:44.856 14:08:46 -- bdev/blockdev.sh@293 -- # killprocess 61837 00:07:44.856 14:08:46 -- common/autotest_common.sh@936 -- # '[' -z 61837 ']' 00:07:44.856 14:08:46 -- common/autotest_common.sh@940 -- # kill -0 61837 00:07:44.856 14:08:46 -- common/autotest_common.sh@941 -- # uname 00:07:44.856 14:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:44.856 14:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61837 00:07:44.856 14:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:44.856 14:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:44.856 14:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61837' 00:07:44.856 killing process with pid 61837 00:07:44.856 14:08:46 -- common/autotest_common.sh@955 -- # kill 61837 00:07:44.856 14:08:46 -- common/autotest_common.sh@960 -- # wait 61837 00:07:45.428 14:08:46 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:45.428 00:07:45.428 real 0m1.926s 00:07:45.428 user 0m4.726s 00:07:45.428 sys 0m0.264s 00:07:45.428 14:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.428 ************************************ 00:07:45.428 END TEST bdev_bounds 00:07:45.428 ************************************ 00:07:45.428 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.428 14:08:46 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:45.428 14:08:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:45.428 14:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.428 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.428 ************************************ 00:07:45.428 START TEST bdev_nbd 00:07:45.428 ************************************ 00:07:45.428 14:08:46 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:45.428 14:08:46 -- bdev/blockdev.sh@298 -- # uname -s 00:07:45.428 14:08:46 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:45.428 14:08:46 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.428 14:08:46 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.428 14:08:46 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.428 14:08:46 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:45.428 14:08:46 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:07:45.428 14:08:46 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:45.428 14:08:46 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:45.428 14:08:46 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:45.428 14:08:46 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:07:45.428 14:08:46 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.428 14:08:46 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:45.428 14:08:46 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.428 14:08:46 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:45.428 14:08:46 -- bdev/blockdev.sh@316 -- # nbd_pid=61893 00:07:45.428 14:08:46 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:45.428 14:08:46 -- bdev/blockdev.sh@318 -- # waitforlisten 61893 /var/tmp/spdk-nbd.sock 00:07:45.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:45.428 14:08:46 -- common/autotest_common.sh@829 -- # '[' -z 61893 ']' 00:07:45.428 14:08:46 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:45.428 14:08:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:45.428 14:08:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.428 14:08:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:45.428 14:08:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.428 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:07:45.428 [2024-12-04 14:08:46.722558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.428 [2024-12-04 14:08:46.722666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.429 [2024-12-04 14:08:46.872350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.690 [2024-12-04 14:08:47.052176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.129 14:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.129 14:08:48 -- common/autotest_common.sh@862 -- # return 0 00:07:47.129 14:08:48 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@24 -- # local i 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:07:47.129 14:08:48 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:47.129 14:08:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:47.129 14:08:48 -- common/autotest_common.sh@867 -- # local i 00:07:47.129 14:08:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.129 14:08:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.129 14:08:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:47.129 14:08:48 -- common/autotest_common.sh@871 -- # break 00:07:47.129 14:08:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.129 14:08:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.129 14:08:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.129 1+0 records in 00:07:47.129 1+0 records out 00:07:47.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00260182 s, 1.6 MB/s 00:07:47.129 14:08:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.129 14:08:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.129 14:08:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.129 14:08:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.129 14:08:48 -- common/autotest_common.sh@887 -- # return 0 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.129 14:08:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:47.391 14:08:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:47.391 14:08:48 -- common/autotest_common.sh@867 -- # local i 00:07:47.391 14:08:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.391 14:08:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.391 14:08:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:47.391 14:08:48 -- common/autotest_common.sh@871 -- # break 00:07:47.391 14:08:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.391 14:08:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.391 14:08:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.391 1+0 records in 00:07:47.391 1+0 records out 00:07:47.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799309 s, 5.1 MB/s 00:07:47.391 14:08:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.391 14:08:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.391 14:08:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.391 14:08:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.391 14:08:48 -- common/autotest_common.sh@887 -- # return 0 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.391 14:08:48 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:47.653 14:08:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:47.653 14:08:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:47.653 14:08:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:47.654 14:08:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:47.654 14:08:48 -- common/autotest_common.sh@867 -- # local i 00:07:47.654 14:08:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.654 14:08:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.654 14:08:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:47.654 14:08:48 -- common/autotest_common.sh@871 -- # break 00:07:47.654 14:08:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.654 14:08:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.654 14:08:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.654 1+0 records in 00:07:47.654 1+0 records out 00:07:47.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904004 s, 4.5 MB/s 00:07:47.654 14:08:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.654 14:08:48 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.654 14:08:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.654 14:08:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.654 14:08:48 -- common/autotest_common.sh@887 -- # return 0 00:07:47.654 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.654 14:08:48 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.654 14:08:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:47.654 14:08:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:47.654 14:08:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:47.654 14:08:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:47.654 14:08:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:47.654 14:08:49 -- common/autotest_common.sh@867 -- # local i 00:07:47.654 14:08:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.654 14:08:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.654 14:08:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:47.654 14:08:49 -- common/autotest_common.sh@871 -- # break 00:07:47.654 14:08:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.654 14:08:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.654 14:08:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.654 1+0 records in 00:07:47.654 1+0 records out 00:07:47.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853356 s, 4.8 MB/s 00:07:47.654 14:08:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.654 14:08:49 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.654 14:08:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.654 14:08:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.654 14:08:49 -- common/autotest_common.sh@887 -- # return 0 00:07:47.654 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.654 14:08:49 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:47.915 14:08:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:47.915 14:08:49 -- common/autotest_common.sh@867 -- # local i 00:07:47.915 14:08:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.915 14:08:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.915 14:08:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:47.915 14:08:49 -- common/autotest_common.sh@871 -- # break 00:07:47.915 14:08:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.915 14:08:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.915 14:08:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.915 1+0 records in 00:07:47.915 1+0 records out 00:07:47.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101051 s, 4.1 MB/s 00:07:47.915 14:08:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.915 14:08:49 -- common/autotest_common.sh@884 -- # size=4096 00:07:47.915 14:08:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.915 14:08:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.915 14:08:49 -- common/autotest_common.sh@887 -- # return 0 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:47.915 14:08:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:48.177 14:08:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:48.177 14:08:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:48.177 14:08:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:48.177 14:08:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:48.177 14:08:49 -- common/autotest_common.sh@867 -- # local i 00:07:48.177 14:08:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:48.177 14:08:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:48.177 14:08:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:48.177 14:08:49 -- common/autotest_common.sh@871 -- # break 00:07:48.177 14:08:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:48.177 14:08:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:48.177 14:08:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.177 1+0 records in 00:07:48.177 1+0 records out 00:07:48.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103815 s, 3.9 MB/s 00:07:48.177 14:08:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.177 14:08:49 -- common/autotest_common.sh@884 -- # size=4096 00:07:48.177 14:08:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.177 14:08:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:48.177 14:08:49 -- common/autotest_common.sh@887 -- # return 0 
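The start/verify cycle repeating above is the same for every device: nbd_start_disk exports one bdev over the kernel nbd driver, then the harness polls /proc/partitions for the node and does a single O_DIRECT read to prove the data path actually serves I/O. A minimal reconstruction of that probe; the loop bounds, the grep, and the dd invocation are verbatim from the trace, while the 0.1 s retry interval and the /tmp output path are assumptions.

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1   # assumed; the trace only shows the loop succeeding on the first try
    done
    # One 4 KiB direct read confirms the kernel<->SPDK nbd path answers I/O.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}

# With no explicit /dev/nbdX argument, nbd_start_disk picks a free node and prints it back:
nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1)
waitfornbd "$(basename "$nbd")"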
00:07:48.177 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.177 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.177 14:08:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:48.438 14:08:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:48.438 14:08:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:48.438 14:08:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:48.438 14:08:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:48.438 14:08:49 -- common/autotest_common.sh@867 -- # local i 00:07:48.438 14:08:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:48.439 14:08:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:48.439 14:08:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:48.439 14:08:49 -- common/autotest_common.sh@871 -- # break 00:07:48.439 14:08:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:48.439 14:08:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:48.439 14:08:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.439 1+0 records in 00:07:48.439 1+0 records out 00:07:48.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115123 s, 3.6 MB/s 00:07:48.439 14:08:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.439 14:08:49 -- common/autotest_common.sh@884 -- # size=4096 00:07:48.439 14:08:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.439 14:08:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:48.439 14:08:49 -- common/autotest_common.sh@887 -- # return 0 00:07:48.439 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.439 14:08:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.439 14:08:49 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.699 14:08:49 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:48.699 { 00:07:48.699 "nbd_device": "/dev/nbd0", 00:07:48.699 "bdev_name": "Nvme0n1p1" 00:07:48.699 }, 00:07:48.699 { 00:07:48.700 "nbd_device": "/dev/nbd1", 00:07:48.700 "bdev_name": "Nvme0n1p2" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd2", 00:07:48.700 "bdev_name": "Nvme1n1" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd3", 00:07:48.700 "bdev_name": "Nvme2n1" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd4", 00:07:48.700 "bdev_name": "Nvme2n2" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd5", 00:07:48.700 "bdev_name": "Nvme2n3" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd6", 00:07:48.700 "bdev_name": "Nvme3n1" 00:07:48.700 } 00:07:48.700 ]' 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd0", 00:07:48.700 "bdev_name": "Nvme0n1p1" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd1", 00:07:48.700 "bdev_name": "Nvme0n1p2" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd2", 00:07:48.700 "bdev_name": "Nvme1n1" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd3", 00:07:48.700 "bdev_name": "Nvme2n1" 00:07:48.700 }, 00:07:48.700 { 
00:07:48.700 "nbd_device": "/dev/nbd4", 00:07:48.700 "bdev_name": "Nvme2n2" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd5", 00:07:48.700 "bdev_name": "Nvme2n3" 00:07:48.700 }, 00:07:48.700 { 00:07:48.700 "nbd_device": "/dev/nbd6", 00:07:48.700 "bdev_name": "Nvme3n1" 00:07:48.700 } 00:07:48.700 ]' 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@51 -- # local i 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.700 14:08:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@41 -- # break 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.961 14:08:50 -- bdev/nbd_common.sh@41 -- # break 00:07:48.962 14:08:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.962 14:08:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.962 14:08:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:49.223 14:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:49.223 14:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:49.223 14:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:49.223 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@41 -- # break 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.224 14:08:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:07:49.485 14:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@41 -- # break 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.485 14:08:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:49.747 14:08:50 -- bdev/nbd_common.sh@41 -- # break 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@41 -- # break 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.747 14:08:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@41 -- # break 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.008 14:08:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.270 14:08:51 -- 
bdev/nbd_common.sh@65 -- # true 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@65 -- # count=0 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@122 -- # count=0 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@127 -- # return 0 00:07:50.270 14:08:51 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@12 -- # local i 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.270 14:08:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:07:50.529 /dev/nbd0 00:07:50.529 14:08:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.529 14:08:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.529 14:08:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:50.529 14:08:51 -- common/autotest_common.sh@867 -- # local i 00:07:50.529 14:08:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.529 14:08:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.529 14:08:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:50.529 14:08:51 -- common/autotest_common.sh@871 -- # break 00:07:50.529 14:08:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.529 14:08:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.529 14:08:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.529 1+0 records in 00:07:50.529 1+0 records out 00:07:50.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629716 s, 6.5 MB/s 00:07:50.529 14:08:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.529 14:08:51 -- common/autotest_common.sh@884 -- # size=4096 00:07:50.529 14:08:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.529 14:08:51 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.529 14:08:51 -- common/autotest_common.sh@887 -- # return 0 00:07:50.529 14:08:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.529 14:08:51 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.529 14:08:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:07:50.787 /dev/nbd1 00:07:50.787 14:08:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:50.787 14:08:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:50.787 14:08:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:50.787 14:08:52 -- common/autotest_common.sh@867 -- # local i 00:07:50.787 14:08:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.787 14:08:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.787 14:08:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:50.787 14:08:52 -- common/autotest_common.sh@871 -- # break 00:07:50.787 14:08:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.787 14:08:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.787 14:08:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.787 1+0 records in 00:07:50.787 1+0 records out 00:07:50.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451043 s, 9.1 MB/s 00:07:50.787 14:08:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.787 14:08:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:50.787 14:08:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:50.787 14:08:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.787 14:08:52 -- common/autotest_common.sh@887 -- # return 0 00:07:50.787 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.787 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:50.787 14:08:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:07:50.787 /dev/nbd10 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:51.046 14:08:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:51.046 14:08:52 -- common/autotest_common.sh@867 -- # local i 00:07:51.046 14:08:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:51.046 14:08:52 -- common/autotest_common.sh@871 -- # break 00:07:51.046 14:08:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.046 1+0 records in 00:07:51.046 1+0 records out 00:07:51.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437829 s, 9.4 MB/s 00:07:51.046 14:08:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.046 14:08:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:51.046 14:08:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:07:51.046 14:08:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.046 14:08:52 -- common/autotest_common.sh@887 -- # return 0 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:51.046 /dev/nbd11 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:51.046 14:08:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:51.046 14:08:52 -- common/autotest_common.sh@867 -- # local i 00:07:51.046 14:08:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:51.046 14:08:52 -- common/autotest_common.sh@871 -- # break 00:07:51.046 14:08:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.046 14:08:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.046 1+0 records in 00:07:51.046 1+0 records out 00:07:51.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359335 s, 11.4 MB/s 00:07:51.046 14:08:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.046 14:08:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:51.046 14:08:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.046 14:08:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.046 14:08:52 -- common/autotest_common.sh@887 -- # return 0 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.046 14:08:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:51.305 /dev/nbd12 00:07:51.305 14:08:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:51.305 14:08:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:51.305 14:08:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:51.305 14:08:52 -- common/autotest_common.sh@867 -- # local i 00:07:51.305 14:08:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.305 14:08:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.305 14:08:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:51.305 14:08:52 -- common/autotest_common.sh@871 -- # break 00:07:51.305 14:08:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.305 14:08:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.305 14:08:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.305 1+0 records in 00:07:51.305 1+0 records out 00:07:51.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491916 s, 8.3 MB/s 00:07:51.305 14:08:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.305 14:08:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:51.305 14:08:52 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.305 14:08:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.305 14:08:52 -- common/autotest_common.sh@887 -- # return 0 00:07:51.305 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.305 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.305 14:08:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:51.562 /dev/nbd13 00:07:51.562 14:08:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:51.562 14:08:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:51.562 14:08:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:51.562 14:08:52 -- common/autotest_common.sh@867 -- # local i 00:07:51.562 14:08:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.562 14:08:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.562 14:08:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:51.562 14:08:52 -- common/autotest_common.sh@871 -- # break 00:07:51.562 14:08:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.562 14:08:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.562 14:08:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.562 1+0 records in 00:07:51.562 1+0 records out 00:07:51.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424419 s, 9.7 MB/s 00:07:51.562 14:08:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.562 14:08:52 -- common/autotest_common.sh@884 -- # size=4096 00:07:51.562 14:08:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.562 14:08:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.562 14:08:52 -- common/autotest_common.sh@887 -- # return 0 00:07:51.562 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.562 14:08:52 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.562 14:08:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:51.820 /dev/nbd14 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:51.820 14:08:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:51.820 14:08:53 -- common/autotest_common.sh@867 -- # local i 00:07:51.820 14:08:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.820 14:08:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.820 14:08:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:51.820 14:08:53 -- common/autotest_common.sh@871 -- # break 00:07:51.820 14:08:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.820 14:08:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.820 14:08:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.820 1+0 records in 00:07:51.820 1+0 records out 00:07:51.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712418 s, 5.7 MB/s 00:07:51.820 14:08:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.820 14:08:53 -- common/autotest_common.sh@884 -- # size=4096 00:07:51.820 14:08:53 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:51.820 14:08:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.820 14:08:53 -- common/autotest_common.sh@887 -- # return 0 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.820 14:08:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.079 14:08:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd0", 00:07:52.079 "bdev_name": "Nvme0n1p1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd1", 00:07:52.079 "bdev_name": "Nvme0n1p2" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd10", 00:07:52.079 "bdev_name": "Nvme1n1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd11", 00:07:52.079 "bdev_name": "Nvme2n1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd12", 00:07:52.079 "bdev_name": "Nvme2n2" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd13", 00:07:52.079 "bdev_name": "Nvme2n3" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd14", 00:07:52.079 "bdev_name": "Nvme3n1" 00:07:52.079 } 00:07:52.079 ]' 00:07:52.079 14:08:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd0", 00:07:52.079 "bdev_name": "Nvme0n1p1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd1", 00:07:52.079 "bdev_name": "Nvme0n1p2" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd10", 00:07:52.079 "bdev_name": "Nvme1n1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd11", 00:07:52.079 "bdev_name": "Nvme2n1" 00:07:52.079 }, 00:07:52.079 { 00:07:52.079 "nbd_device": "/dev/nbd12", 00:07:52.080 "bdev_name": "Nvme2n2" 00:07:52.080 }, 00:07:52.080 { 00:07:52.080 "nbd_device": "/dev/nbd13", 00:07:52.080 "bdev_name": "Nvme2n3" 00:07:52.080 }, 00:07:52.080 { 00:07:52.080 "nbd_device": "/dev/nbd14", 00:07:52.080 "bdev_name": "Nvme3n1" 00:07:52.080 } 00:07:52.080 ]' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.080 /dev/nbd1 00:07:52.080 /dev/nbd10 00:07:52.080 /dev/nbd11 00:07:52.080 /dev/nbd12 00:07:52.080 /dev/nbd13 00:07:52.080 /dev/nbd14' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.080 /dev/nbd1 00:07:52.080 /dev/nbd10 00:07:52.080 /dev/nbd11 00:07:52.080 /dev/nbd12 00:07:52.080 /dev/nbd13 00:07:52.080 /dev/nbd14' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@65 -- # count=7 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@66 -- # echo 7 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@95 -- # count=7 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:52.080 256+0 records in 00:07:52.080 256+0 records out 00:07:52.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511036 s, 205 MB/s 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.080 256+0 records in 00:07:52.080 256+0 records out 00:07:52.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640312 s, 16.4 MB/s 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.080 14:08:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.341 256+0 records in 00:07:52.341 256+0 records out 00:07:52.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167147 s, 6.3 MB/s 00:07:52.341 14:08:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.341 14:08:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:52.341 256+0 records in 00:07:52.341 256+0 records out 00:07:52.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178135 s, 5.9 MB/s 00:07:52.341 14:08:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.341 14:08:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:52.602 256+0 records in 00:07:52.602 256+0 records out 00:07:52.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223997 s, 4.7 MB/s 00:07:52.602 14:08:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.602 14:08:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:52.863 256+0 records in 00:07:52.863 256+0 records out 00:07:52.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114147 s, 9.2 MB/s 00:07:52.863 14:08:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.863 14:08:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:52.863 256+0 records in 00:07:52.863 256+0 records out 00:07:52.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102711 s, 10.2 MB/s 00:07:52.863 14:08:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.863 14:08:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:53.124 256+0 records in 00:07:53.124 256+0 records out 00:07:53.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231904 s, 4.5 MB/s 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@51 -- # local i 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.124 14:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@41 -- # break 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.384 14:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:53.643 14:08:54 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@41 -- # break 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.643 14:08:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@41 -- # break 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@41 -- # break 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.901 14:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@41 -- # break 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.159 14:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@41 -- # break 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:07:54.416 14:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@41 -- # break 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.674 14:08:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.674 14:08:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:54.674 14:08:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:54.674 14:08:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@65 -- # true 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@65 -- # count=0 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@104 -- # count=0 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@109 -- # return 0 00:07:54.932 14:08:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:54.932 malloc_lvol_verify 00:07:54.932 14:08:56 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:55.189 10190190-bee7-467e-9bb0-2888931280c7 00:07:55.189 14:08:56 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:55.446 f4563afe-56ee-42d9-bcc0-25b455282f87 00:07:55.446 14:08:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:55.704 /dev/nbd0 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:55.704 mke2fs 1.47.0 (5-Feb-2023) 00:07:55.704 Discarding device blocks: 0/4096 done 00:07:55.704 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:55.704 00:07:55.704 Allocating group tables: 0/1 done 00:07:55.704 Writing inode tables: 0/1 done 00:07:55.704 Creating journal (1024 
blocks): done 00:07:55.704 Writing superblocks and filesystem accounting information: 0/1 done 00:07:55.704 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@51 -- # local i 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.704 14:08:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@41 -- # break 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:55.704 14:08:57 -- bdev/nbd_common.sh@147 -- # return 0 00:07:55.704 14:08:57 -- bdev/blockdev.sh@324 -- # killprocess 61893 00:07:55.704 14:08:57 -- common/autotest_common.sh@936 -- # '[' -z 61893 ']' 00:07:55.704 14:08:57 -- common/autotest_common.sh@940 -- # kill -0 61893 00:07:55.704 14:08:57 -- common/autotest_common.sh@941 -- # uname 00:07:55.704 14:08:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.704 14:08:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61893 00:07:55.962 14:08:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.962 14:08:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.962 killing process with pid 61893 00:07:55.962 14:08:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61893' 00:07:55.962 14:08:57 -- common/autotest_common.sh@955 -- # kill 61893 00:07:55.962 14:08:57 -- common/autotest_common.sh@960 -- # wait 61893 00:07:56.531 14:08:57 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:07:56.532 00:07:56.532 real 0m11.196s 00:07:56.532 user 0m15.425s 00:07:56.532 sys 0m3.476s 00:07:56.532 14:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.532 14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:07:56.532 ************************************ 00:07:56.532 END TEST bdev_nbd 00:07:56.532 ************************************ 00:07:56.532 14:08:57 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:07:56.532 skipping fio tests on NVMe due to multi-ns failures. 00:07:56.532 14:08:57 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:07:56.532 14:08:57 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:07:56.532 14:08:57 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
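Stripped of the xtrace noise, the lvol round-trip that just completed reduces to six commands, each visible verbatim in the trace above (socket path, names, and sizes are copied from it; running them standalone outside the harness is the only assumption):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore 'lvs' on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # format it; a clean mkfs proves writes land
    $RPC nbd_stop_disk /dev/nbd0                           # tear the export down again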
00:07:56.532 14:08:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:56.532 14:08:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:56.532 14:08:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:07:56.532 14:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:56.532 14:08:57 -- common/autotest_common.sh@10 -- # set +x
00:07:56.532 ************************************
00:07:56.532 START TEST bdev_verify
00:07:56.532 ************************************
00:07:56.532 14:08:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:56.532 [2024-12-04 14:08:57.979939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-12-04 14:08:57.980047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62311 ]
00:07:56.791 [2024-12-04 14:08:58.125034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:57.049 [2024-12-04 14:08:58.264378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:57.049 [2024-12-04 14:08:58.264482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.307 Running I/O for 5 seconds...
00:08:02.579
00:08:02.579 Latency(us)
00:08:02.579 [2024-12-04T14:09:04.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x5e800
00:08:02.579 Nvme0n1p1 : 5.06 2606.53 10.18 0.00 0.00 48974.33 5847.83 75820.11
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x5e800 length 0x5e800
00:08:02.579 Nvme0n1p1 : 5.06 2599.31 10.15 0.00 0.00 48968.18 8015.56 73400.32
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x5e7ff
00:08:02.579 Nvme0n1p2 : 5.06 2604.73 10.17 0.00 0.00 48968.88 7864.32 53638.70
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:02.579 Nvme0n1p2 : 5.06 2598.66 10.15 0.00 0.00 48938.40 8267.62 60091.47
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0xa0000
00:08:02.579 Nvme1n1 : 5.06 2603.32 10.17 0.00 0.00 48934.79 9830.40 55251.89
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0xa0000 length 0xa0000
00:08:02.579 Nvme1n1 : 5.06 2598.05 10.15 0.00 0.00 48909.36 8620.50 59284.87
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x80000
00:08:02.579 Nvme2n1 : 5.07 2602.15 10.16 0.00 0.00 48914.37 11241.94 53235.40
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x80000 length 0x80000
00:08:02.579 Nvme2n1 : 5.06 2596.39 10.14 0.00 0.00 48886.44 10788.23 56865.08
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x80000
00:08:02.579 Nvme2n2 : 5.07 2601.55 10.16 0.00 0.00 48890.89 11645.24 54041.99
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x80000 length 0x80000
00:08:02.579 Nvme2n2 : 5.06 2595.17 10.14 0.00 0.00 48863.80 12250.19 57671.68
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x80000
00:08:02.579 Nvme2n3 : 5.07 2600.98 10.16 0.00 0.00 48853.35 11947.72 53235.40
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x80000 length 0x80000
00:08:02.579 Nvme2n3 : 5.05 2601.04 10.16 0.00 0.00 49080.61 6604.01 57671.68
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x0 length 0x20000
00:08:02.579 Nvme3n1 : 5.07 2600.41 10.16 0.00 0.00 48823.38 10939.47 54848.59
00:08:02.579 [2024-12-04T14:09:04.044Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:02.579 Verification LBA range: start 0x20000 length 0x20000
00:08:02.579 Nvme3n1 : 5.05 2600.29 10.16 0.00 0.00 49009.66 7208.96 58478.28
00:08:02.579 [2024-12-04T14:09:04.044Z] ===================================================================================================================
00:08:02.579 [2024-12-04T14:09:04.044Z] Total : 36408.58 142.22 0.00 0.00 48929.71 5847.83 75820.11
00:08:06.825
00:08:06.825 real 0m10.265s
00:08:06.825 user 0m19.348s
00:08:06.825 sys 0m0.247s
00:08:06.825 ************************************
00:08:06.825 END TEST bdev_verify
00:08:06.825 ************************************
00:08:06.825 14:09:08 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:06.825 14:09:08 -- common/autotest_common.sh@10 -- # set +x
00:08:06.825 14:09:08 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:06.825 14:09:08 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:06.825 14:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:06.825 14:09:08 -- common/autotest_common.sh@10 -- # set +x
00:08:06.825 ************************************
00:08:06.825 START TEST bdev_verify_big_io
00:08:06.825 ************************************
00:08:06.825 14:09:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:07.086 [2024-12-04 14:09:08.321457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:07.086 [2024-12-04 14:09:08.321611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62424 ]
00:08:07.086 [2024-12-04 14:09:08.473753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:07.347 [2024-12-04 14:09:08.696888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:07.347 [2024-12-04 14:09:08.696973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.292 Running I/O for 5 seconds...
00:08:13.567
00:08:13.567 Latency(us)
00:08:13.567 [2024-12-04T14:09:15.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x5e80
00:08:13.567 Nvme0n1p1 : 5.32 261.31 16.33 0.00 0.00 478728.90 72593.72 735616.39
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x5e80 length 0x5e80
00:08:13.567 Nvme0n1p1 : 5.32 277.75 17.36 0.00 0.00 450015.46 63317.86 738842.78
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x5e7f
00:08:13.567 Nvme0n1p2 : 5.37 266.33 16.65 0.00 0.00 466114.74 43556.23 671088.64
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x5e7f length 0x5e7f
00:08:13.567 Nvme0n1p2 : 5.37 283.75 17.73 0.00 0.00 436985.86 46984.27 667862.25
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0xa000
00:08:13.567 Nvme1n1 : 5.37 266.25 16.64 0.00 0.00 460215.86 44161.18 619466.44
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0xa000 length 0xa000
00:08:13.567 Nvme1n1 : 5.37 283.57 17.72 0.00 0.00 431141.73 49605.71 613013.66
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x8000
00:08:13.567 Nvme2n1 : 5.40 273.58 17.10 0.00 0.00 444254.61 27625.94 606560.89
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x8000 length 0x8000
00:08:13.567 Nvme2n1 : 5.41 289.54 18.10 0.00 0.00 417453.74 35691.91 554938.68
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x8000
00:08:13.567 Nvme2n2 : 5.40 273.51 17.09 0.00 0.00 438515.38 28029.24 613013.66
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x8000 length 0x8000
00:08:13.567 Nvme2n2 : 5.42 296.88 18.56 0.00 0.00 403141.49 9376.69 500090.09
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x8000
00:08:13.567 Nvme2n3 : 5.42 289.15 18.07 0.00 0.00 411883.81 3201.18 622692.82
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x8000 length 0x8000
00:08:13.567 Nvme2n3 : 5.44 305.23 19.08 0.00 0.00 387383.29 10284.11 500090.09
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x0 length 0x2000
00:08:13.567 Nvme3n1 : 5.42 296.78 18.55 0.00 0.00 396115.51 3478.45 1013085.74
00:08:13.567 [2024-12-04T14:09:15.032Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:13.567 Verification LBA range: start 0x2000 length 0x2000
00:08:13.567 Nvme3n1 : 5.45 327.92 20.49 0.00 0.00 356152.21 1342.23 680767.80
00:08:13.567 [2024-12-04T14:09:15.032Z] ===================================================================================================================
00:08:13.567 [2024-12-04T14:09:15.032Z] Total : 3991.55 249.47 0.00 0.00 424879.79 1342.23 1013085.74
00:08:15.467
00:08:15.467 real 0m8.268s
00:08:15.467 user 0m15.337s
00:08:15.467 sys 0m0.327s
00:08:15.467 ************************************
00:08:15.467 END TEST bdev_verify_big_io
00:08:15.467 ************************************
00:08:15.467 14:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:15.467 14:09:16 -- common/autotest_common.sh@10 -- # set +x
00:08:15.467 14:09:16 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:15.467 14:09:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:15.467 14:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:15.467 14:09:16 -- common/autotest_common.sh@10 -- # set +x
00:08:15.467 ************************************
00:08:15.467 START TEST bdev_write_zeroes
00:08:15.467 ************************************
00:08:15.467 14:09:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:15.467 [2024-12-04 14:09:16.629629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-12-04 14:09:16.629735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62539 ]
00:08:15.467 [2024-12-04 14:09:16.776553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.467 [2024-12-04 14:09:16.913618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:16.036 Running I/O for 1 seconds...
00:08:16.976
00:08:16.976 Latency(us)
00:08:16.976 [2024-12-04T14:09:18.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme0n1p1 : 1.02 9190.90 35.90 0.00 0.00 13886.59 6553.60 24802.86
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme0n1p2 : 1.02 9179.32 35.86 0.00 0.00 13885.70 6452.78 27021.00
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme1n1 : 1.02 9168.81 35.82 0.00 0.00 13875.62 9779.99 24097.08
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme2n1 : 1.02 9158.54 35.78 0.00 0.00 13850.69 9729.58 22786.36
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme2n2 : 1.02 9148.05 35.73 0.00 0.00 13815.07 9779.99 22685.54
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme2n3 : 1.02 9194.04 35.91 0.00 0.00 13738.97 7813.91 23189.66
00:08:16.976 [2024-12-04T14:09:18.441Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:16.976 Nvme3n1 : 1.02 9183.54 35.87 0.00 0.00 13736.76 8469.27 23391.31
00:08:16.976 [2024-12-04T14:09:18.441Z] ===================================================================================================================
00:08:16.976 [2024-12-04T14:09:18.441Z] Total : 64223.20 250.87 0.00 0.00 13826.88 6452.78 27021.00
00:08:17.920
00:08:17.920 real 0m2.707s
00:08:17.920 user 0m2.418s
00:08:17.920 sys 0m0.177s
00:08:17.920 14:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:17.920 ************************************
00:08:17.920 END TEST bdev_write_zeroes
00:08:17.920 ************************************
00:08:17.920 14:09:19 -- common/autotest_common.sh@10 -- # set +x
00:08:17.920 14:09:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:17.920 14:09:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:17.920 14:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:17.920 14:09:19 -- common/autotest_common.sh@10 -- # set +x
00:08:17.920 ************************************
00:08:17.920 START TEST bdev_json_nonenclosed
00:08:17.920 ************************************
00:08:17.920 14:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:18.180 [2024-12-04 14:09:19.404693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:18.180 [2024-12-04 14:09:19.404792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62581 ] 00:08:18.180 [2024-12-04 14:09:19.549161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.441 [2024-12-04 14:09:19.726529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.441 [2024-12-04 14:09:19.726665] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:18.441 [2024-12-04 14:09:19.726683] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.702 00:08:18.702 real 0m0.663s 00:08:18.702 user 0m0.463s 00:08:18.702 sys 0m0.096s 00:08:18.702 14:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.702 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.702 ************************************ 00:08:18.702 END TEST bdev_json_nonenclosed 00:08:18.702 ************************************ 00:08:18.702 14:09:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.702 14:09:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:18.702 14:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.702 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:18.702 ************************************ 00:08:18.702 START TEST bdev_json_nonarray 00:08:18.702 ************************************ 00:08:18.702 14:09:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.702 [2024-12-04 14:09:20.125319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.702 [2024-12-04 14:09:20.125427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62612 ] 00:08:18.964 [2024-12-04 14:09:20.274660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.223 [2024-12-04 14:09:20.447942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.223 [2024-12-04 14:09:20.448099] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:19.223 [2024-12-04 14:09:20.448118] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.483 00:08:19.483 real 0m0.660s 00:08:19.483 user 0m0.464s 00:08:19.483 sys 0m0.091s 00:08:19.483 ************************************ 00:08:19.483 END TEST bdev_json_nonarray 00:08:19.483 ************************************ 00:08:19.483 14:09:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.483 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:19.483 14:09:20 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:08:19.483 14:09:20 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:08:19.483 14:09:20 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:19.483 14:09:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.483 14:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.483 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:19.483 ************************************ 00:08:19.483 START TEST bdev_gpt_uuid 00:08:19.483 ************************************ 00:08:19.483 14:09:20 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:08:19.483 14:09:20 -- bdev/blockdev.sh@612 -- # local bdev 00:08:19.483 14:09:20 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:08:19.483 14:09:20 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62643 00:08:19.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.483 14:09:20 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:19.483 14:09:20 -- bdev/blockdev.sh@47 -- # waitforlisten 62643 00:08:19.483 14:09:20 -- common/autotest_common.sh@829 -- # '[' -z 62643 ']' 00:08:19.483 14:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.483 14:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.483 14:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.483 14:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.483 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:19.483 14:09:20 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:19.483 [2024-12-04 14:09:20.857528] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:19.483 [2024-12-04 14:09:20.857646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62643 ] 00:08:19.743 [2024-12-04 14:09:21.006915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.743 [2024-12-04 14:09:21.182001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.743 [2024-12-04 14:09:21.182277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.201 14:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.201 14:09:22 -- common/autotest_common.sh@862 -- # return 0 00:08:21.202 14:09:22 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.202 14:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.202 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.202 Some configs were skipped because the RPC state that can call them passed over. 
00:08:21.202 14:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.202 14:09:22 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:08:21.202 14:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.202 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.462 14:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:21.462 14:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.462 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.462 14:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@619 -- # bdev='[ 00:08:21.462 { 00:08:21.462 "name": "Nvme0n1p1", 00:08:21.462 "aliases": [ 00:08:21.462 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:21.462 ], 00:08:21.462 "product_name": "GPT Disk", 00:08:21.462 "block_size": 4096, 00:08:21.462 "num_blocks": 774144, 00:08:21.462 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:21.462 "md_size": 64, 00:08:21.462 "md_interleave": false, 00:08:21.462 "dif_type": 0, 00:08:21.462 "assigned_rate_limits": { 00:08:21.462 "rw_ios_per_sec": 0, 00:08:21.462 "rw_mbytes_per_sec": 0, 00:08:21.462 "r_mbytes_per_sec": 0, 00:08:21.462 "w_mbytes_per_sec": 0 00:08:21.462 }, 00:08:21.462 "claimed": false, 00:08:21.462 "zoned": false, 00:08:21.462 "supported_io_types": { 00:08:21.462 "read": true, 00:08:21.462 "write": true, 00:08:21.462 "unmap": true, 00:08:21.462 "write_zeroes": true, 00:08:21.462 "flush": true, 00:08:21.462 "reset": true, 00:08:21.462 "compare": true, 00:08:21.462 "compare_and_write": false, 00:08:21.462 "abort": true, 00:08:21.462 "nvme_admin": false, 00:08:21.462 "nvme_io": false 00:08:21.462 }, 00:08:21.462 "driver_specific": { 00:08:21.462 "gpt": { 00:08:21.462 "base_bdev": "Nvme0n1", 00:08:21.462 "offset_blocks": 256, 00:08:21.462 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:21.462 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:21.462 "partition_name": "SPDK_TEST_first" 00:08:21.462 } 00:08:21.462 } 00:08:21.462 } 00:08:21.462 ]' 00:08:21.462 14:09:22 -- bdev/blockdev.sh@620 -- # jq -r length 00:08:21.462 14:09:22 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:08:21.462 14:09:22 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:21.462 14:09:22 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:21.462 14:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.462 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:08:21.462 14:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.462 14:09:22 -- bdev/blockdev.sh@624 -- # bdev='[ 00:08:21.462 { 00:08:21.462 "name": "Nvme0n1p2", 00:08:21.462 "aliases": [ 00:08:21.462 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:21.462 ], 00:08:21.462 "product_name": "GPT Disk", 00:08:21.462 "block_size": 4096, 00:08:21.462 "num_blocks": 774143, 00:08:21.462 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:08:21.462 "md_size": 64, 00:08:21.462 "md_interleave": false, 00:08:21.462 "dif_type": 0, 00:08:21.462 "assigned_rate_limits": { 00:08:21.462 "rw_ios_per_sec": 0, 00:08:21.462 "rw_mbytes_per_sec": 0, 00:08:21.462 "r_mbytes_per_sec": 0, 00:08:21.462 "w_mbytes_per_sec": 0 00:08:21.462 }, 00:08:21.462 "claimed": false, 00:08:21.462 "zoned": false, 00:08:21.462 "supported_io_types": { 00:08:21.462 "read": true, 00:08:21.462 "write": true, 00:08:21.462 "unmap": true, 00:08:21.462 "write_zeroes": true, 00:08:21.462 "flush": true, 00:08:21.462 "reset": true, 00:08:21.462 "compare": true, 00:08:21.462 "compare_and_write": false, 00:08:21.462 "abort": true, 00:08:21.462 "nvme_admin": false, 00:08:21.462 "nvme_io": false 00:08:21.462 }, 00:08:21.462 "driver_specific": { 00:08:21.462 "gpt": { 00:08:21.462 "base_bdev": "Nvme0n1", 00:08:21.462 "offset_blocks": 774400, 00:08:21.462 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:21.462 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:21.463 "partition_name": "SPDK_TEST_second" 00:08:21.463 } 00:08:21.463 } 00:08:21.463 } 00:08:21.463 ]' 00:08:21.463 14:09:22 -- bdev/blockdev.sh@625 -- # jq -r length 00:08:21.463 14:09:22 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:08:21.463 14:09:22 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:08:21.463 14:09:22 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:21.463 14:09:22 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:21.463 14:09:22 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:21.463 14:09:22 -- bdev/blockdev.sh@629 -- # killprocess 62643 00:08:21.463 14:09:22 -- common/autotest_common.sh@936 -- # '[' -z 62643 ']' 00:08:21.463 14:09:22 -- common/autotest_common.sh@940 -- # kill -0 62643 00:08:21.463 14:09:22 -- common/autotest_common.sh@941 -- # uname 00:08:21.463 14:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.463 14:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62643 00:08:21.463 14:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.463 14:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.463 killing process with pid 62643 00:08:21.463 14:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62643' 00:08:21.463 14:09:22 -- common/autotest_common.sh@955 -- # kill 62643 00:08:21.463 14:09:22 -- common/autotest_common.sh@960 -- # wait 62643 00:08:23.370 00:08:23.370 real 0m3.564s 00:08:23.370 user 0m3.820s 00:08:23.370 sys 0m0.394s 00:08:23.370 ************************************ 00:08:23.371 END TEST bdev_gpt_uuid 00:08:23.371 ************************************ 00:08:23.371 14:09:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.371 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:08:23.371 14:09:24 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:08:23.371 14:09:24 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:23.371 14:09:24 -- bdev/blockdev.sh@809 -- # cleanup 00:08:23.371 14:09:24 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:23.371 14:09:24 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:23.371 14:09:24 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:08:23.371 14:09:24 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:08:23.371 14:09:24 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:08:23.371 14:09:24 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:23.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.632 Waiting for block devices as requested 00:08:23.632 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.632 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.632 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:23.894 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.181 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:29.181 14:09:30 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:08:29.181 14:09:30 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:08:29.181 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:29.181 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:08:29.181 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:29.181 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:08:29.181 14:09:30 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:08:29.181 00:08:29.181 real 0m59.838s 00:08:29.181 user 1m16.898s 00:08:29.182 sys 0m7.771s 00:08:29.182 14:09:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.182 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:08:29.182 ************************************ 00:08:29.182 END TEST blockdev_nvme_gpt 00:08:29.182 ************************************ 00:08:29.182 14:09:30 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:29.182 14:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.182 14:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.182 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:08:29.182 ************************************ 00:08:29.182 START TEST nvme 00:08:29.182 ************************************ 00:08:29.182 14:09:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:29.182 * Looking for test storage... 
00:08:29.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:29.443 14:09:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.443 14:09:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.443 14:09:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.443 14:09:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.443 14:09:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.443 14:09:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.443 14:09:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.443 14:09:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.443 14:09:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.443 14:09:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.443 14:09:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.443 14:09:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.443 14:09:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.443 14:09:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.443 14:09:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.443 14:09:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.443 14:09:30 -- scripts/common.sh@344 -- # : 1 00:08:29.443 14:09:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.443 14:09:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.443 14:09:30 -- scripts/common.sh@364 -- # decimal 1 00:08:29.443 14:09:30 -- scripts/common.sh@352 -- # local d=1 00:08:29.443 14:09:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.443 14:09:30 -- scripts/common.sh@354 -- # echo 1 00:08:29.443 14:09:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.443 14:09:30 -- scripts/common.sh@365 -- # decimal 2 00:08:29.443 14:09:30 -- scripts/common.sh@352 -- # local d=2 00:08:29.443 14:09:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.443 14:09:30 -- scripts/common.sh@354 -- # echo 2 00:08:29.443 14:09:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.443 14:09:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.443 14:09:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.443 14:09:30 -- scripts/common.sh@367 -- # return 0 00:08:29.443 14:09:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.443 14:09:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.443 --rc genhtml_branch_coverage=1 00:08:29.443 --rc genhtml_function_coverage=1 00:08:29.443 --rc genhtml_legend=1 00:08:29.443 --rc geninfo_all_blocks=1 00:08:29.443 --rc geninfo_unexecuted_blocks=1 00:08:29.443 00:08:29.443 ' 00:08:29.443 14:09:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.443 --rc genhtml_branch_coverage=1 00:08:29.443 --rc genhtml_function_coverage=1 00:08:29.443 --rc genhtml_legend=1 00:08:29.443 --rc geninfo_all_blocks=1 00:08:29.443 --rc geninfo_unexecuted_blocks=1 00:08:29.443 00:08:29.443 ' 00:08:29.443 14:09:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.443 --rc genhtml_branch_coverage=1 00:08:29.443 --rc genhtml_function_coverage=1 00:08:29.443 --rc genhtml_legend=1 00:08:29.443 --rc geninfo_all_blocks=1 00:08:29.443 --rc geninfo_unexecuted_blocks=1 00:08:29.443 00:08:29.443 ' 00:08:29.443 14:09:30 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.443 --rc genhtml_branch_coverage=1 00:08:29.443 --rc genhtml_function_coverage=1 00:08:29.443 --rc genhtml_legend=1 00:08:29.443 --rc geninfo_all_blocks=1 00:08:29.443 --rc geninfo_unexecuted_blocks=1 00:08:29.443 00:08:29.443 ' 00:08:29.443 14:09:30 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:30.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:30.386 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.386 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.386 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.386 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:30.386 14:09:31 -- nvme/nvme.sh@79 -- # uname 00:08:30.386 14:09:31 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:30.386 14:09:31 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:30.386 14:09:31 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:30.386 14:09:31 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:30.386 14:09:31 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:08:30.386 14:09:31 -- common/autotest_common.sh@1055 -- # echo 0 00:08:30.386 14:09:31 -- common/autotest_common.sh@1057 -- # stubpid=63312 00:08:30.386 Waiting for stub to ready for secondary processes... 00:08:30.386 14:09:31 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:08:30.386 14:09:31 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:30.386 14:09:31 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63312 ]] 00:08:30.386 14:09:31 -- common/autotest_common.sh@1062 -- # sleep 1s 00:08:30.386 14:09:31 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:30.646 [2024-12-04 14:09:31.857108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:30.646 [2024-12-04 14:09:31.857203] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.218 [2024-12-04 14:09:32.619646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.480 [2024-12-04 14:09:32.820198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.480 14:09:32 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:31.480 [2024-12-04 14:09:32.820363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.480 [2024-12-04 14:09:32.820432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.480 14:09:32 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63312 ]] 00:08:31.480 14:09:32 -- common/autotest_common.sh@1062 -- # sleep 1s 00:08:31.480 [2024-12-04 14:09:32.843273] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.480 [2024-12-04 14:09:32.854956] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:31.480 [2024-12-04 14:09:32.855167] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:31.480 [2024-12-04 14:09:32.863466] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.480 [2024-12-04 14:09:32.863675] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:31.480 [2024-12-04 14:09:32.863805] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:31.480 [2024-12-04 14:09:32.871895] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.480 [2024-12-04 14:09:32.872133] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:31.480 [2024-12-04 14:09:32.872265] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:31.480 [2024-12-04 14:09:32.879571] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:31.480 [2024-12-04 14:09:32.879739] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:31.480 [2024-12-04 14:09:32.879855] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:31.480 [2024-12-04 14:09:32.879946] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:31.480 [2024-12-04 14:09:32.880061] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:32.423 14:09:33 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:32.423 done. 00:08:32.424 14:09:33 -- common/autotest_common.sh@1064 -- # echo done. 
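The trace above shows the pattern the harness uses to gate the NVMe tests on the stub: it starts test/app/stub/stub, then polls once per second until the readiness file /var/run/spdk_stub0 exists, checking on each pass that the stub pid is still alive. A minimal bash sketch of that wait loop, using the pid and paths visible in the trace (the real implementation lives in autotest_common.sh and may differ in detail):

  # Poll until the SPDK stub signals readiness, failing fast if it dies first.
  stubpid=63312                            # pid reported by the harness above
  while [ ! -e /var/run/spdk_stub0 ]; do   # readiness file created by the stub
      if [ ! -e "/proc/$stubpid" ]; then   # stub exited before becoming ready
          echo "stub $stubpid died before readiness" >&2
          exit 1
      fi
      sleep 1s
  done
  echo done.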
00:08:32.424 14:09:33 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:32.424 14:09:33 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:08:32.424 14:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.424 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:08:32.424 ************************************ 00:08:32.424 START TEST nvme_reset 00:08:32.424 ************************************ 00:08:32.424 14:09:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:32.684 Initializing NVMe Controllers 00:08:32.684 Skipping QEMU NVMe SSD at 0000:00:09.0 00:08:32.684 Skipping QEMU NVMe SSD at 0000:00:06.0 00:08:32.684 Skipping QEMU NVMe SSD at 0000:00:07.0 00:08:32.684 Skipping QEMU NVMe SSD at 0000:00:08.0 00:08:32.684 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:32.684 00:08:32.684 real 0m0.190s 00:08:32.684 user 0m0.061s 00:08:32.684 sys 0m0.091s 00:08:32.684 ************************************ 00:08:32.684 14:09:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.684 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 END TEST nvme_reset 00:08:32.684 ************************************ 00:08:32.684 14:09:34 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:32.684 14:09:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.684 14:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.684 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 ************************************ 00:08:32.684 START TEST nvme_identify 00:08:32.684 ************************************ 00:08:32.684 14:09:34 -- common/autotest_common.sh@1114 -- # nvme_identify 00:08:32.684 14:09:34 -- nvme/nvme.sh@12 -- # bdfs=() 00:08:32.684 14:09:34 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:32.684 14:09:34 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:32.684 14:09:34 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:32.684 14:09:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:32.684 14:09:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:32.684 14:09:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:32.684 14:09:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:32.684 14:09:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:32.684 14:09:34 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:08:32.684 14:09:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:08:32.684 14:09:34 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:32.948 [2024-12-04 14:09:34.300775] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63354 terminated unexpected 00:08:32.948 ===================================================== 00:08:32.948 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:32.948 ===================================================== 00:08:32.948 Controller Capabilities/Features 00:08:32.948 ================================ 00:08:32.948 Vendor ID: 1b36 00:08:32.948 Subsystem Vendor ID: 1af4 00:08:32.948 Serial Number: 12343 00:08:32.948 Model Number: QEMU NVMe Ctrl 00:08:32.948 Firmware Version: 8.0.0 00:08:32.948 Recommended Arb 
Burst: 6 00:08:32.948 IEEE OUI Identifier: 00 54 52 00:08:32.948 Multi-path I/O 00:08:32.948 May have multiple subsystem ports: No 00:08:32.948 May have multiple controllers: Yes 00:08:32.948 Associated with SR-IOV VF: No 00:08:32.948 Max Data Transfer Size: 524288 00:08:32.948 Max Number of Namespaces: 256 00:08:32.948 Max Number of I/O Queues: 64 00:08:32.948 NVMe Specification Version (VS): 1.4 00:08:32.948 NVMe Specification Version (Identify): 1.4 00:08:32.948 Maximum Queue Entries: 2048 00:08:32.948 Contiguous Queues Required: Yes 00:08:32.948 Arbitration Mechanisms Supported 00:08:32.948 Weighted Round Robin: Not Supported 00:08:32.948 Vendor Specific: Not Supported 00:08:32.948 Reset Timeout: 7500 ms 00:08:32.948 Doorbell Stride: 4 bytes 00:08:32.948 NVM Subsystem Reset: Not Supported 00:08:32.948 Command Sets Supported 00:08:32.948 NVM Command Set: Supported 00:08:32.948 Boot Partition: Not Supported 00:08:32.948 Memory Page Size Minimum: 4096 bytes 00:08:32.948 Memory Page Size Maximum: 65536 bytes 00:08:32.948 Persistent Memory Region: Not Supported 00:08:32.948 Optional Asynchronous Events Supported 00:08:32.948 Namespace Attribute Notices: Supported 00:08:32.948 Firmware Activation Notices: Not Supported 00:08:32.948 ANA Change Notices: Not Supported 00:08:32.948 PLE Aggregate Log Change Notices: Not Supported 00:08:32.948 LBA Status Info Alert Notices: Not Supported 00:08:32.948 EGE Aggregate Log Change Notices: Not Supported 00:08:32.948 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.948 Zone Descriptor Change Notices: Not Supported 00:08:32.948 Discovery Log Change Notices: Not Supported 00:08:32.948 Controller Attributes 00:08:32.948 128-bit Host Identifier: Not Supported 00:08:32.948 Non-Operational Permissive Mode: Not Supported 00:08:32.948 NVM Sets: Not Supported 00:08:32.948 Read Recovery Levels: Not Supported 00:08:32.948 Endurance Groups: Supported 00:08:32.948 Predictable Latency Mode: Not Supported 00:08:32.948 Traffic Based Keep Alive: Not Supported 00:08:32.948 Namespace Granularity: Not Supported 00:08:32.948 SQ Associations: Not Supported 00:08:32.948 UUID List: Not Supported 00:08:32.948 Multi-Domain Subsystem: Not Supported 00:08:32.948 Fixed Capacity Management: Not Supported 00:08:32.948 Variable Capacity Management: Not Supported 00:08:32.948 Delete Endurance Group: Not Supported 00:08:32.948 Delete NVM Set: Not Supported 00:08:32.948 Extended LBA Formats Supported: Supported 00:08:32.948 Flexible Data Placement Supported: Supported 00:08:32.948 00:08:32.948 Controller Memory Buffer Support 00:08:32.948 ================================ 00:08:32.948 Supported: No 00:08:32.948 00:08:32.948 Persistent Memory Region Support 00:08:32.948 ================================ 00:08:32.948 Supported: No 00:08:32.948 00:08:32.948 Admin Command Set Attributes 00:08:32.949 ============================ 00:08:32.949 Security Send/Receive: Not Supported 00:08:32.949 Format NVM: Supported 00:08:32.949 Firmware Activate/Download: Not Supported 00:08:32.949 Namespace Management: Supported 00:08:32.949 Device Self-Test: Not Supported 00:08:32.949 Directives: Supported 00:08:32.949 NVMe-MI: Not Supported 00:08:32.949 Virtualization Management: Not Supported 00:08:32.949 Doorbell Buffer Config: Supported 00:08:32.949 Get LBA Status Capability: Not Supported 00:08:32.949 Command & Feature Lockdown Capability: Not Supported 00:08:32.949 Abort Command Limit: 4 00:08:32.949 Async Event Request Limit: 4 00:08:32.949 Number of Firmware Slots: N/A 00:08:32.949 Firmware
Slot 1 Read-Only: N/A 00:08:32.949 Firmware Activation Without Reset: N/A 00:08:32.949 Multiple Update Detection Support: N/A 00:08:32.949 Firmware Update Granularity: No Information Provided 00:08:32.949 Per-Namespace SMART Log: Yes 00:08:32.949 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.949 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:32.949 Command Effects Log Page: Supported 00:08:32.949 Get Log Page Extended Data: Supported 00:08:32.949 Telemetry Log Pages: Not Supported 00:08:32.949 Persistent Event Log Pages: Not Supported 00:08:32.949 Supported Log Pages Log Page: May Support 00:08:32.949 Commands Supported & Effects Log Page: Not Supported 00:08:32.949 Feature Identifiers & Effects Log Page:May Support 00:08:32.949 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.949 Data Area 4 for Telemetry Log: Not Supported 00:08:32.949 Error Log Page Entries Supported: 1 00:08:32.949 Keep Alive: Not Supported 00:08:32.949 00:08:32.949 NVM Command Set Attributes 00:08:32.949 ========================== 00:08:32.949 Submission Queue Entry Size 00:08:32.949 Max: 64 00:08:32.949 Min: 64 00:08:32.949 Completion Queue Entry Size 00:08:32.949 Max: 16 00:08:32.949 Min: 16 00:08:32.949 Number of Namespaces: 256 00:08:32.949 Compare Command: Supported 00:08:32.949 Write Uncorrectable Command: Not Supported 00:08:32.949 Dataset Management Command: Supported 00:08:32.949 Write Zeroes Command: Supported 00:08:32.949 Set Features Save Field: Supported 00:08:32.949 Reservations: Not Supported 00:08:32.949 Timestamp: Supported 00:08:32.949 Copy: Supported 00:08:32.949 Volatile Write Cache: Present 00:08:32.949 Atomic Write Unit (Normal): 1 00:08:32.949 Atomic Write Unit (PFail): 1 00:08:32.949 Atomic Compare & Write Unit: 1 00:08:32.949 Fused Compare & Write: Not Supported 00:08:32.949 Scatter-Gather List 00:08:32.949 SGL Command Set: Supported 00:08:32.949 SGL Keyed: Not Supported 00:08:32.949 SGL Bit Bucket Descriptor: Not Supported 00:08:32.949 SGL Metadata Pointer: Not Supported 00:08:32.949 Oversized SGL: Not Supported 00:08:32.949 SGL Metadata Address: Not Supported 00:08:32.949 SGL Offset: Not Supported 00:08:32.949 Transport SGL Data Block: Not Supported 00:08:32.949 Replay Protected Memory Block: Not Supported 00:08:32.949 00:08:32.949 Firmware Slot Information 00:08:32.949 ========================= 00:08:32.949 Active slot: 1 00:08:32.949 Slot 1 Firmware Revision: 1.0 00:08:32.949 00:08:32.949 00:08:32.949 Commands Supported and Effects 00:08:32.949 ============================== 00:08:32.949 Admin Commands 00:08:32.949 -------------- 00:08:32.949 Delete I/O Submission Queue (00h): Supported 00:08:32.949 Create I/O Submission Queue (01h): Supported 00:08:32.949 Get Log Page (02h): Supported 00:08:32.949 Delete I/O Completion Queue (04h): Supported 00:08:32.949 Create I/O Completion Queue (05h): Supported 00:08:32.949 Identify (06h): Supported 00:08:32.949 Abort (08h): Supported 00:08:32.949 Set Features (09h): Supported 00:08:32.949 Get Features (0Ah): Supported 00:08:32.949 Asynchronous Event Request (0Ch): Supported 00:08:32.949 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.949 Directive Send (19h): Supported 00:08:32.949 Directive Receive (1Ah): Supported 00:08:32.949 Virtualization Management (1Ch): Supported 00:08:32.949 Doorbell Buffer Config (7Ch): Supported 00:08:32.949 Format NVM (80h): Supported LBA-Change 00:08:32.949 I/O Commands 00:08:32.949 ------------ 00:08:32.949 Flush (00h): Supported LBA-Change 00:08:32.949 Write (01h): 
Supported LBA-Change 00:08:32.949 Read (02h): Supported 00:08:32.949 Compare (05h): Supported 00:08:32.949 Write Zeroes (08h): Supported LBA-Change 00:08:32.949 Dataset Management (09h): Supported LBA-Change 00:08:32.949 Unknown (0Ch): Supported 00:08:32.949 Unknown (12h): Supported 00:08:32.949 Copy (19h): Supported LBA-Change 00:08:32.949 Unknown (1Dh): Supported LBA-Change 00:08:32.949 00:08:32.949 Error Log 00:08:32.949 ========= 00:08:32.949 00:08:32.949 Arbitration 00:08:32.949 =========== 00:08:32.949 Arbitration Burst: no limit 00:08:32.949 00:08:32.949 Power Management 00:08:32.949 ================ 00:08:32.949 Number of Power States: 1 00:08:32.949 Current Power State: Power State #0 00:08:32.949 Power State #0: 00:08:32.949 Max Power: 25.00 W 00:08:32.949 Non-Operational State: Operational 00:08:32.949 Entry Latency: 16 microseconds 00:08:32.949 Exit Latency: 4 microseconds 00:08:32.949 Relative Read Throughput: 0 00:08:32.949 Relative Read Latency: 0 00:08:32.949 Relative Write Throughput: 0 00:08:32.949 Relative Write Latency: 0 00:08:32.949 Idle Power: Not Reported 00:08:32.949 Active Power: Not Reported 00:08:32.949 Non-Operational Permissive Mode: Not Supported 00:08:32.949 00:08:32.949 Health Information 00:08:32.949 ================== 00:08:32.949 Critical Warnings: 00:08:32.949 Available Spare Space: OK 00:08:32.949 Temperature: OK 00:08:32.949 Device Reliability: OK 00:08:32.949 Read Only: No 00:08:32.949 Volatile Memory Backup: OK 00:08:32.949 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.949 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.949 Available Spare: 0% 00:08:32.949 Available Spare Threshold: 0% 00:08:32.949 Life Percentage Used: 0% 00:08:32.949 Data Units Read: 1440 00:08:32.949 Data Units Written: 667 00:08:32.949 Host Read Commands: 59128 00:08:32.949 Host Write Commands: 28978 00:08:32.949 Controller Busy Time: 0 minutes 00:08:32.949 Power Cycles: 0 00:08:32.949 Power On Hours: 0 hours 00:08:32.949 Unsafe Shutdowns: 0 00:08:32.949 Unrecoverable Media Errors: 0 00:08:32.949 Lifetime Error Log Entries: 0 00:08:32.949 Warning Temperature Time: 0 minutes 00:08:32.949 Critical Temperature Time: 0 minutes 00:08:32.949 00:08:32.949 Number of Queues 00:08:32.949 ================ 00:08:32.949 Number of I/O Submission Queues: 64 00:08:32.949 Number of I/O Completion Queues: 64 00:08:32.949 00:08:32.949 ZNS Specific Controller Data 00:08:32.949 ============================ 00:08:32.949 Zone Append Size Limit: 0 00:08:32.949 00:08:32.949 00:08:32.949 Active Namespaces 00:08:32.949 ================= 00:08:32.949 Namespace ID:1 00:08:32.949 Error Recovery Timeout: Unlimited 00:08:32.949 Command Set Identifier: NVM (00h) 00:08:32.949 Deallocate: Supported 00:08:32.949 Deallocated/Unwritten Error: Supported 00:08:32.949 Deallocated Read Value: All 0x00 00:08:32.949 Deallocate in Write Zeroes: Not Supported 00:08:32.949 Deallocated Guard Field: 0xFFFF 00:08:32.949 Flush: Supported 00:08:32.949 Reservation: Not Supported 00:08:32.949 Namespace Sharing Capabilities: Multiple Controllers 00:08:32.949 Size (in LBAs): 262144 (1GiB) 00:08:32.949 Capacity (in LBAs): 262144 (1GiB) 00:08:32.949 Utilization (in LBAs): 262144 (1GiB) 00:08:32.949 Thin Provisioning: Not Supported 00:08:32.949 Per-NS Atomic Units: No 00:08:32.949 Maximum Single Source Range Length: 128 00:08:32.949 Maximum Copy Length: 128 00:08:32.949 Maximum Source Range Count: 128 00:08:32.949 NGUID/EUI64 Never Reused: No 00:08:32.949 Namespace Write Protected: No 00:08:32.949 Endurance group ID: 1 
00:08:32.949 Number of LBA Formats: 8 00:08:32.949 Current LBA Format: LBA Format #04 00:08:32.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.949 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.949 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.949 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.949 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.949 00:08:32.949 Get Feature FDP: 00:08:32.949 ================ 00:08:32.950 Enabled: Yes 00:08:32.950 FDP configuration index: 0 00:08:32.950 00:08:32.950 FDP configurations log page 00:08:32.950 =========================== 00:08:32.950 Number of FDP configurations: 1 00:08:32.950 Version: 0 00:08:32.950 Size: 112 00:08:32.950 FDP Configuration Descriptor: 0 00:08:32.950 Descriptor Size: 96 00:08:32.950 Reclaim Group Identifier format: 2 00:08:32.950 FDP Volatile Write Cache: Not Present 00:08:32.950 FDP Configuration: Valid 00:08:32.950 Vendor Specific Size: 0 00:08:32.950 Number of Reclaim Groups: 2 00:08:32.950 Number of Reclaim Unit Handles: 8 00:08:32.950 Max Placement Identifiers: 128 00:08:32.950 Number of Namespaces Supported: 256 00:08:32.950 Reclaim unit Nominal Size: 6000000 bytes 00:08:32.950 Estimated Reclaim Unit Time Limit: Not Reported 00:08:32.950 RUH Desc #000: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #001: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #002: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #003: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #004: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #005: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #006: RUH Type: Initially Isolated 00:08:32.950 RUH Desc #007: RUH Type: Initially Isolated 00:08:32.950 00:08:32.950 FDP reclaim unit handle usage log page 00:08:32.950 =================================[2024-12-04 14:09:34.302219] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63354 terminated unexpected ===== 00:08:32.950 Number of Reclaim Unit Handles: 8 00:08:32.950 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:32.950 RUH Usage Desc #001: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #002: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #003: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #004: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #005: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #006: RUH Attributes: Unused 00:08:32.950 RUH Usage Desc #007: RUH Attributes: Unused 00:08:32.950 00:08:32.950 FDP statistics log page 00:08:32.950 ======================= 00:08:32.950 Host bytes with metadata written: 445685760 00:08:32.950 Media bytes with metadata written: 445755392 00:08:32.950 Media bytes erased: 0 00:08:32.950 00:08:32.950 FDP events log page 00:08:32.950 =================== 00:08:32.950 Number of FDP events: 0 00:08:32.950 00:08:32.950 ===================================================== 00:08:32.950 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:08:32.950 ===================================================== 00:08:32.950 Controller Capabilities/Features 00:08:32.950 ================================ 00:08:32.950 Vendor ID: 1b36 00:08:32.950 Subsystem Vendor ID: 1af4 00:08:32.950 Serial Number: 12340 00:08:32.950 Model Number: QEMU NVMe Ctrl 00:08:32.950 Firmware Version: 8.0.0 00:08:32.950 Recommended
Arb Burst: 6 00:08:32.950 IEEE OUI Identifier: 00 54 52 00:08:32.950 Multi-path I/O 00:08:32.950 May have multiple subsystem ports: No 00:08:32.950 May have multiple controllers: No 00:08:32.950 Associated with SR-IOV VF: No 00:08:32.950 Max Data Transfer Size: 524288 00:08:32.950 Max Number of Namespaces: 256 00:08:32.950 Max Number of I/O Queues: 64 00:08:32.950 NVMe Specification Version (VS): 1.4 00:08:32.950 NVMe Specification Version (Identify): 1.4 00:08:32.950 Maximum Queue Entries: 2048 00:08:32.950 Contiguous Queues Required: Yes 00:08:32.950 Arbitration Mechanisms Supported 00:08:32.950 Weighted Round Robin: Not Supported 00:08:32.950 Vendor Specific: Not Supported 00:08:32.950 Reset Timeout: 7500 ms 00:08:32.950 Doorbell Stride: 4 bytes 00:08:32.950 NVM Subsystem Reset: Not Supported 00:08:32.950 Command Sets Supported 00:08:32.950 NVM Command Set: Supported 00:08:32.950 Boot Partition: Not Supported 00:08:32.950 Memory Page Size Minimum: 4096 bytes 00:08:32.950 Memory Page Size Maximum: 65536 bytes 00:08:32.950 Persistent Memory Region: Not Supported 00:08:32.950 Optional Asynchronous Events Supported 00:08:32.950 Namespace Attribute Notices: Supported 00:08:32.950 Firmware Activation Notices: Not Supported 00:08:32.950 ANA Change Notices: Not Supported 00:08:32.950 PLE Aggregate Log Change Notices: Not Supported 00:08:32.950 LBA Status Info Alert Notices: Not Supported 00:08:32.950 EGE Aggregate Log Change Notices: Not Supported 00:08:32.950 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.950 Zone Descriptor Change Notices: Not Supported 00:08:32.950 Discovery Log Change Notices: Not Supported 00:08:32.950 Controller Attributes 00:08:32.950 128-bit Host Identifier: Not Supported 00:08:32.950 Non-Operational Permissive Mode: Not Supported 00:08:32.950 NVM Sets: Not Supported 00:08:32.950 Read Recovery Levels: Not Supported 00:08:32.950 Endurance Groups: Not Supported 00:08:32.950 Predictable Latency Mode: Not Supported 00:08:32.950 Traffic Based Keep Alive: Not Supported 00:08:32.950 Namespace Granularity: Not Supported 00:08:32.950 SQ Associations: Not Supported 00:08:32.950 UUID List: Not Supported 00:08:32.950 Multi-Domain Subsystem: Not Supported 00:08:32.950 Fixed Capacity Management: Not Supported 00:08:32.950 Variable Capacity Management: Not Supported 00:08:32.950 Delete Endurance Group: Not Supported 00:08:32.950 Delete NVM Set: Not Supported 00:08:32.950 Extended LBA Formats Supported: Supported 00:08:32.950 Flexible Data Placement Supported: Not Supported 00:08:32.950 00:08:32.950 Controller Memory Buffer Support 00:08:32.950 ================================ 00:08:32.950 Supported: No 00:08:32.950 00:08:32.950 Persistent Memory Region Support 00:08:32.950 ================================ 00:08:32.950 Supported: No 00:08:32.950 00:08:32.950 Admin Command Set Attributes 00:08:32.950 ============================ 00:08:32.950 Security Send/Receive: Not Supported 00:08:32.950 Format NVM: Supported 00:08:32.950 Firmware Activate/Download: Not Supported 00:08:32.950 Namespace Management: Supported 00:08:32.950 Device Self-Test: Not Supported 00:08:32.950 Directives: Supported 00:08:32.950 NVMe-MI: Not Supported 00:08:32.950 Virtualization Management: Not Supported 00:08:32.950 Doorbell Buffer Config: Supported 00:08:32.950 Get LBA Status Capability: Not Supported 00:08:32.950 Command & Feature Lockdown Capability: Not Supported 00:08:32.950 Abort Command Limit: 4 00:08:32.950 Async Event Request Limit: 4 00:08:32.950 Number of Firmware Slots: N/A 00:08:32.950
Firmware Slot 1 Read-Only: N/A 00:08:32.950 Firmware Activation Without Reset: N/A 00:08:32.950 Multiple Update Detection Support: N/A 00:08:32.950 Firmware Update Granularity: No Information Provided 00:08:32.950 Per-Namespace SMART Log: Yes 00:08:32.950 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.950 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:32.950 Command Effects Log Page: Supported 00:08:32.950 Get Log Page Extended Data: Supported 00:08:32.950 Telemetry Log Pages: Not Supported 00:08:32.950 Persistent Event Log Pages: Not Supported 00:08:32.950 Supported Log Pages Log Page: May Support 00:08:32.950 Commands Supported & Effects Log Page: Not Supported 00:08:32.950 Feature Identifiers & Effects Log Page:May Support 00:08:32.950 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.950 Data Area 4 for Telemetry Log: Not Supported 00:08:32.950 Error Log Page Entries Supported: 1 00:08:32.950 Keep Alive: Not Supported 00:08:32.950 00:08:32.950 NVM Command Set Attributes 00:08:32.950 ========================== 00:08:32.950 Submission Queue Entry Size 00:08:32.950 Max: 64 00:08:32.950 Min: 64 00:08:32.950 Completion Queue Entry Size 00:08:32.950 Max: 16 00:08:32.950 Min: 16 00:08:32.950 Number of Namespaces: 256 00:08:32.950 Compare Command: Supported 00:08:32.950 Write Uncorrectable Command: Not Supported 00:08:32.950 Dataset Management Command: Supported 00:08:32.950 Write Zeroes Command: Supported 00:08:32.950 Set Features Save Field: Supported 00:08:32.950 Reservations: Not Supported 00:08:32.950 Timestamp: Supported 00:08:32.950 Copy: Supported 00:08:32.950 Volatile Write Cache: Present 00:08:32.950 Atomic Write Unit (Normal): 1 00:08:32.950 Atomic Write Unit (PFail): 1 00:08:32.950 Atomic Compare & Write Unit: 1 00:08:32.950 Fused Compare & Write: Not Supported 00:08:32.950 Scatter-Gather List 00:08:32.950 SGL Command Set: Supported 00:08:32.950 SGL Keyed: Not Supported 00:08:32.950 SGL Bit Bucket Descriptor: Not Supported 00:08:32.950 SGL Metadata Pointer: Not Supported 00:08:32.950 Oversized SGL: Not Supported 00:08:32.950 SGL Metadata Address: Not Supported 00:08:32.950 SGL Offset: Not Supported 00:08:32.950 Transport SGL Data Block: Not Supported 00:08:32.950 Replay Protected Memory Block: Not Supported 00:08:32.950 00:08:32.950 Firmware Slot Information 00:08:32.950 ========================= 00:08:32.950 Active slot: 1 00:08:32.950 Slot 1 Firmware Revision: 1.0 00:08:32.950 00:08:32.950 00:08:32.950 Commands Supported and Effects 00:08:32.950 ============================== 00:08:32.950 Admin Commands 00:08:32.950 -------------- 00:08:32.950 Delete I/O Submission Queue (00h): Supported 00:08:32.951 Create I/O Submission Queue (01h): Supported 00:08:32.951 Get Log Page (02h): Supported 00:08:32.951 Delete I/O Completion Queue (04h): Supported 00:08:32.951 Create I/O Completion Queue (05h): Supported 00:08:32.951 Identify (06h): Supported 00:08:32.951 Abort (08h): Supported 00:08:32.951 Set Features (09h): Supported 00:08:32.951 Get Features (0Ah): Supported 00:08:32.951 Asynchronous Event Request (0Ch): Supported 00:08:32.951 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.951 Directive Send (19h): Supported 00:08:32.951 Directive Receive (1Ah): Supported 00:08:32.951 Virtualization Management (1Ch): Supported 00:08:32.951 Doorbell Buffer Config (7Ch): Supported 00:08:32.951 Format NVM (80h): Supported LBA-Change 00:08:32.951 I/O Commands 00:08:32.951 ------------ 00:08:32.951 Flush (00h): Supported LBA-Change 00:08:32.951 Write (01h): 
Supported LBA-Change 00:08:32.951 Read (02h): Supported 00:08:32.951 Compare (05h): Supported 00:08:32.951 Write Zeroes (08h): Supported LBA-Change 00:08:32.951 Dataset Management (09h): Supported LBA-Change 00:08:32.951 Unknown (0Ch): Supported 00:08:32.951 [2024-12-04 14:09:34.302926] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63354 terminated unexpected 00:08:32.951 Unknown (12h): Supported 00:08:32.951 Copy (19h): Supported LBA-Change 00:08:32.951 Unknown (1Dh): Supported LBA-Change 00:08:32.951 00:08:32.951 Error Log 00:08:32.951 ========= 00:08:32.951 00:08:32.951 Arbitration 00:08:32.951 =========== 00:08:32.951 Arbitration Burst: no limit 00:08:32.951 00:08:32.951 Power Management 00:08:32.951 ================ 00:08:32.951 Number of Power States: 1 00:08:32.951 Current Power State: Power State #0 00:08:32.951 Power State #0: 00:08:32.951 Max Power: 25.00 W 00:08:32.951 Non-Operational State: Operational 00:08:32.951 Entry Latency: 16 microseconds 00:08:32.951 Exit Latency: 4 microseconds 00:08:32.951 Relative Read Throughput: 0 00:08:32.951 Relative Read Latency: 0 00:08:32.951 Relative Write Throughput: 0 00:08:32.951 Relative Write Latency: 0 00:08:32.951 Idle Power: Not Reported 00:08:32.951 Active Power: Not Reported 00:08:32.951 Non-Operational Permissive Mode: Not Supported 00:08:32.951 00:08:32.951 Health Information 00:08:32.951 ================== 00:08:32.951 Critical Warnings: 00:08:32.951 Available Spare Space: OK 00:08:32.951 Temperature: OK 00:08:32.951 Device Reliability: OK 00:08:32.951 Read Only: No 00:08:32.951 Volatile Memory Backup: OK 00:08:32.951 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.951 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.951 Available Spare: 0% 00:08:32.951 Available Spare Threshold: 0% 00:08:32.951 Life Percentage Used: 0% 00:08:32.951 Data Units Read: 1942 00:08:32.951 Data Units Written: 891 00:08:32.951 Host Read Commands: 87851 00:08:32.951 Host Write Commands: 43534 00:08:32.951 Controller Busy Time: 0 minutes 00:08:32.951 Power Cycles: 0 00:08:32.951 Power On Hours: 0 hours 00:08:32.951 Unsafe Shutdowns: 0 00:08:32.951 Unrecoverable Media Errors: 0 00:08:32.951 Lifetime Error Log Entries: 0 00:08:32.951 Warning Temperature Time: 0 minutes 00:08:32.951 Critical Temperature Time: 0 minutes 00:08:32.951 00:08:32.951 Number of Queues 00:08:32.951 ================ 00:08:32.951 Number of I/O Submission Queues: 64 00:08:32.951 Number of I/O Completion Queues: 64 00:08:32.951 00:08:32.951 ZNS Specific Controller Data 00:08:32.951 ============================ 00:08:32.951 Zone Append Size Limit: 0 00:08:32.951 00:08:32.951 00:08:32.951 Active Namespaces 00:08:32.951 ================= 00:08:32.951 Namespace ID:1 00:08:32.951 Error Recovery Timeout: Unlimited 00:08:32.951 Command Set Identifier: NVM (00h) 00:08:32.951 Deallocate: Supported 00:08:32.951 Deallocated/Unwritten Error: Supported 00:08:32.951 Deallocated Read Value: All 0x00 00:08:32.951 Deallocate in Write Zeroes: Not Supported 00:08:32.951 Deallocated Guard Field: 0xFFFF 00:08:32.951 Flush: Supported 00:08:32.951 Reservation: Not Supported 00:08:32.951 Metadata Transferred as: Separate Metadata Buffer 00:08:32.951 Namespace Sharing Capabilities: Private 00:08:32.951 Size (in LBAs): 1548666 (5GiB) 00:08:32.951 Capacity (in LBAs): 1548666 (5GiB) 00:08:32.951 Utilization (in LBAs): 1548666 (5GiB) 00:08:32.951 Thin Provisioning: Not Supported 00:08:32.951 Per-NS Atomic Units: No 00:08:32.951 Maximum Single Source Range Length: 
128 00:08:32.951 Maximum Copy Length: 128 00:08:32.951 Maximum Source Range Count: 128 00:08:32.951 NGUID/EUI64 Never Reused: No 00:08:32.951 Namespace Write Protected: No 00:08:32.951 Number of LBA Formats: 8 00:08:32.951 Current LBA Format: LBA Format #07 00:08:32.951 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.951 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.951 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.951 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.951 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.951 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.951 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.951 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.951 00:08:32.951 ===================================================== 00:08:32.951 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:32.951 ===================================================== 00:08:32.951 Controller Capabilities/Features 00:08:32.951 ================================ 00:08:32.951 Vendor ID: 1b36 00:08:32.951 Subsystem Vendor ID: 1af4 00:08:32.951 Serial Number: 12341 00:08:32.951 Model Number: QEMU NVMe Ctrl 00:08:32.951 Firmware Version: 8.0.0 00:08:32.951 Recommended Arb Burst: 6 00:08:32.951 IEEE OUI Identifier: 00 54 52 00:08:32.951 Multi-path I/O 00:08:32.951 May have multiple subsystem ports: No 00:08:32.951 May have multiple controllers: No 00:08:32.951 Associated with SR-IOV VF: No 00:08:32.951 Max Data Transfer Size: 524288 00:08:32.951 Max Number of Namespaces: 256 00:08:32.951 Max Number of I/O Queues: 64 00:08:32.951 NVMe Specification Version (VS): 1.4 00:08:32.951 NVMe Specification Version (Identify): 1.4 00:08:32.951 Maximum Queue Entries: 2048 00:08:32.951 Contiguous Queues Required: Yes 00:08:32.951 Arbitration Mechanisms Supported 00:08:32.951 Weighted Round Robin: Not Supported 00:08:32.951 Vendor Specific: Not Supported 00:08:32.951 Reset Timeout: 7500 ms 00:08:32.951 Doorbell Stride: 4 bytes 00:08:32.951 NVM Subsystem Reset: Not Supported 00:08:32.951 Command Sets Supported 00:08:32.951 NVM Command Set: Supported 00:08:32.951 Boot Partition: Not Supported 00:08:32.951 Memory Page Size Minimum: 4096 bytes 00:08:32.951 Memory Page Size Maximum: 65536 bytes 00:08:32.951 Persistent Memory Region: Not Supported 00:08:32.951 Optional Asynchronous Events Supported 00:08:32.951 Namespace Attribute Notices: Supported 00:08:32.951 Firmware Activation Notices: Not Supported 00:08:32.951 ANA Change Notices: Not Supported 00:08:32.951 PLE Aggregate Log Change Notices: Not Supported 00:08:32.951 LBA Status Info Alert Notices: Not Supported 00:08:32.951 EGE Aggregate Log Change Notices: Not Supported 00:08:32.951 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.951 Zone Descriptor Change Notices: Not Supported 00:08:32.951 Discovery Log Change Notices: Not Supported 00:08:32.951 Controller Attributes 00:08:32.951 128-bit Host Identifier: Not Supported 00:08:32.951 Non-Operational Permissive Mode: Not Supported 00:08:32.951 NVM Sets: Not Supported 00:08:32.951 Read Recovery Levels: Not Supported 00:08:32.951 Endurance Groups: Not Supported 00:08:32.951 Predictable Latency Mode: Not Supported 00:08:32.951 Traffic Based Keep Alive: Not Supported 00:08:32.951 Namespace Granularity: Not Supported 00:08:32.951 SQ Associations: Not Supported 00:08:32.951 UUID List: Not Supported 00:08:32.951 Multi-Domain Subsystem: Not Supported 00:08:32.951 Fixed Capacity Management: Not Supported 00:08:32.951 Variable Capacity
Management: Not Supported 00:08:32.951 Delete Endurance Group: Not Supported 00:08:32.951 Delete NVM Set: Not Supported 00:08:32.951 Extended LBA Formats Supported: Supported 00:08:32.951 Flexible Data Placement Supported: Not Supported 00:08:32.951 00:08:32.951 Controller Memory Buffer Support 00:08:32.951 ================================ 00:08:32.951 Supported: No 00:08:32.951 00:08:32.951 Persistent Memory Region Support 00:08:32.951 ================================ 00:08:32.951 Supported: No 00:08:32.951 00:08:32.951 Admin Command Set Attributes 00:08:32.951 ============================ 00:08:32.951 Security Send/Receive: Not Supported 00:08:32.951 Format NVM: Supported 00:08:32.951 Firmware Activate/Download: Not Supported 00:08:32.951 Namespace Management: Supported 00:08:32.951 Device Self-Test: Not Supported 00:08:32.952 Directives: Supported 00:08:32.952 NVMe-MI: Not Supported 00:08:32.952 Virtualization Management: Not Supported 00:08:32.952 Doorbell Buffer Config: Supported 00:08:32.952 Get LBA Status Capability: Not Supported 00:08:32.952 Command & Feature Lockdown Capability: Not Supported 00:08:32.952 Abort Command Limit: 4 00:08:32.952 Async Event Request Limit: 4 00:08:32.952 Number of Firmware Slots: N/A 00:08:32.952 Firmware Slot 1 Read-Only: N/A 00:08:32.952 Firmware Activation Without Reset: N/A 00:08:32.952 Multiple Update Detection Support: N/A 00:08:32.952 Firmware Update Granularity: No Information Provided 00:08:32.952 Per-Namespace SMART Log: Yes 00:08:32.952 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.952 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:32.952 Command Effects Log Page: Supported 00:08:32.952 Get Log Page Extended Data: Supported 00:08:32.952 Telemetry Log Pages: Not Supported 00:08:32.952 Persistent Event Log Pages: Not Supported 00:08:32.952 Supported Log Pages Log Page: May Support 00:08:32.952 Commands Supported & Effects Log Page: Not Supported 00:08:32.952 Feature Identifiers & Effects Log Page:May Support 00:08:32.952 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.952 Data Area 4 for Telemetry Log: Not Supported 00:08:32.952 Error Log Page Entries Supported: 1 00:08:32.952 Keep Alive: Not Supported 00:08:32.952 00:08:32.952 NVM Command Set Attributes 00:08:32.952 ========================== 00:08:32.952 Submission Queue Entry Size 00:08:32.952 Max: 64 00:08:32.952 Min: 64 00:08:32.952 Completion Queue Entry Size 00:08:32.952 Max: 16 00:08:32.952 Min: 16 00:08:32.952 Number of Namespaces: 256 00:08:32.952 Compare Command: Supported 00:08:32.952 Write Uncorrectable Command: Not Supported 00:08:32.952 Dataset Management Command: Supported 00:08:32.952 Write Zeroes Command: Supported 00:08:32.952 Set Features Save Field: Supported 00:08:32.952 Reservations: Not Supported 00:08:32.952 Timestamp: Supported 00:08:32.952 Copy: Supported 00:08:32.952 Volatile Write Cache: Present 00:08:32.952 Atomic Write Unit (Normal): 1 00:08:32.952 Atomic Write Unit (PFail): 1 00:08:32.952 Atomic Compare & Write Unit: 1 00:08:32.952 Fused Compare & Write: Not Supported 00:08:32.952 Scatter-Gather List 00:08:32.952 SGL Command Set: Supported 00:08:32.952 SGL Keyed: Not Supported 00:08:32.952 SGL Bit Bucket Descriptor: Not Supported 00:08:32.952 SGL Metadata Pointer: Not Supported 00:08:32.952 Oversized SGL: Not Supported 00:08:32.952 SGL Metadata Address: Not Supported 00:08:32.952 SGL Offset: Not Supported 00:08:32.952 Transport SGL Data Block: Not Supported 00:08:32.952 Replay Protected Memory Block: Not Supported 00:08:32.952 
00:08:32.952 Firmware Slot Information 00:08:32.952 ========================= 00:08:32.952 Active slot: 1 00:08:32.952 Slot 1 Firmware Revision: 1.0 00:08:32.952 00:08:32.952 00:08:32.952 Commands Supported and Effects 00:08:32.952 ============================== 00:08:32.952 Admin Commands 00:08:32.952 -------------- 00:08:32.952 Delete I/O Submission Queue (00h): Supported 00:08:32.952 Create I/O Submission Queue (01h): Supported 00:08:32.952 Get Log Page (02h): Supported 00:08:32.952 Delete I/O Completion Queue (04h): Supported 00:08:32.952 Create I/O Completion Queue (05h): Supported 00:08:32.952 Identify (06h): Supported 00:08:32.952 Abort (08h): Supported 00:08:32.952 Set Features (09h): Supported 00:08:32.952 Get Features (0Ah): Supported 00:08:32.952 Asynchronous Event Request (0Ch): Supported 00:08:32.952 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.952 Directive Send (19h): Supported 00:08:32.952 Directive Receive (1Ah): Supported 00:08:32.952 Virtualization Management (1Ch): Supported 00:08:32.952 Doorbell Buffer Config (7Ch): Supported 00:08:32.952 Format NVM (80h): Supported LBA-Change 00:08:32.952 I/O Commands 00:08:32.952 ------------ 00:08:32.952 Flush (00h): Supported LBA-Change 00:08:32.952 Write (01h): Supported LBA-Change 00:08:32.952 Read (02h): Supported 00:08:32.952 Compare (05h): Supported 00:08:32.952 Write Zeroes (08h): Supported LBA-Change 00:08:32.952 Dataset Management (09h): Supported LBA-Change 00:08:32.952 Unknown (0Ch): Supported 00:08:32.952 Unknown (12h): Supported 00:08:32.952 Copy (19h): Supported LBA-Change 00:08:32.952 Unknown (1Dh): Supported LBA-Change 00:08:32.952 00:08:32.952 Error Log 00:08:32.952 ========= 00:08:32.952 00:08:32.952 Arbitration 00:08:32.952 =========== 00:08:32.952 Arbitration Burst: no limit 00:08:32.952 00:08:32.952 Power Management 00:08:32.952 ================ 00:08:32.952 Number of Power States: 1 00:08:32.952 Current Power State: Power State #0 00:08:32.952 Power State #0: 00:08:32.952 Max Power: 25.00 W 00:08:32.952 Non-Operational State: Operational 00:08:32.952 Entry Latency: 16 microseconds 00:08:32.952 Exit Latency: 4 microseconds 00:08:32.952 Relative Read Throughput: 0 00:08:32.952 Relative Read Latency: 0 00:08:32.952 Relative Write Throughput: 0 00:08:32.952 Relative Write Latency: 0 00:08:32.952 Idle Power: Not Reported 00:08:32.952 Active Power: Not Reported 00:08:32.952 Non-Operational Permissive Mode: Not Supported 00:08:32.952 00:08:32.952 Health Information 00:08:32.952 ================== 00:08:32.952 Critical Warnings: 00:08:32.952 Available Spare Space: OK 00:08:32.952 Temperature: OK 00:08:32.952 Device Reliability: OK 00:08:32.952 Read Only: No 00:08:32.952 Volatile Memory Backup: OK 00:08:32.952 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.952 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.952 Available Spare: 0% 00:08:32.952 Available Spare Threshold: 0% 00:08:32.952 Life Percentage Used: 0% 00:08:32.952 Data Units Read: 1306 00:08:32.952 Data Units Written: 601 [2024-12-04 14:09:34.303553] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63354 terminated unexpected 00:08:32.952 Host Read Commands: 58043 00:08:32.952 Host Write Commands: 28446 00:08:32.952 Controller Busy Time: 0 minutes 00:08:32.952 Power Cycles: 0 00:08:32.952 Power On Hours: 0 hours 00:08:32.952 Unsafe Shutdowns: 0 00:08:32.952 Unrecoverable Media Errors: 0 00:08:32.952 Lifetime Error Log Entries: 0 00:08:32.952 Warning Temperature Time: 0 minutes 
00:08:32.952 Critical Temperature Time: 0 minutes 00:08:32.952 00:08:32.952 Number of Queues 00:08:32.952 ================ 00:08:32.952 Number of I/O Submission Queues: 64 00:08:32.952 Number of I/O Completion Queues: 64 00:08:32.952 00:08:32.952 ZNS Specific Controller Data 00:08:32.952 ============================ 00:08:32.952 Zone Append Size Limit: 0 00:08:32.952 00:08:32.952 00:08:32.952 Active Namespaces 00:08:32.952 ================= 00:08:32.952 Namespace ID:1 00:08:32.952 Error Recovery Timeout: Unlimited 00:08:32.952 Command Set Identifier: NVM (00h) 00:08:32.952 Deallocate: Supported 00:08:32.952 Deallocated/Unwritten Error: Supported 00:08:32.952 Deallocated Read Value: All 0x00 00:08:32.952 Deallocate in Write Zeroes: Not Supported 00:08:32.952 Deallocated Guard Field: 0xFFFF 00:08:32.952 Flush: Supported 00:08:32.952 Reservation: Not Supported 00:08:32.952 Namespace Sharing Capabilities: Private 00:08:32.952 Size (in LBAs): 1310720 (5GiB) 00:08:32.952 Capacity (in LBAs): 1310720 (5GiB) 00:08:32.952 Utilization (in LBAs): 1310720 (5GiB) 00:08:32.952 Thin Provisioning: Not Supported 00:08:32.952 Per-NS Atomic Units: No 00:08:32.952 Maximum Single Source Range Length: 128 00:08:32.952 Maximum Copy Length: 128 00:08:32.952 Maximum Source Range Count: 128 00:08:32.952 NGUID/EUI64 Never Reused: No 00:08:32.952 Namespace Write Protected: No 00:08:32.952 Number of LBA Formats: 8 00:08:32.952 Current LBA Format: LBA Format #04 00:08:32.952 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.952 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.952 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.952 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.952 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.952 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.952 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.952 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.952 00:08:32.952 ===================================================== 00:08:32.952 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:32.952 ===================================================== 00:08:32.952 Controller Capabilities/Features 00:08:32.952 ================================ 00:08:32.952 Vendor ID: 1b36 00:08:32.952 Subsystem Vendor ID: 1af4 00:08:32.952 Serial Number: 12342 00:08:32.952 Model Number: QEMU NVMe Ctrl 00:08:32.952 Firmware Version: 8.0.0 00:08:32.952 Recommended Arb Burst: 6 00:08:32.952 IEEE OUI Identifier: 00 54 52 00:08:32.952 Multi-path I/O 00:08:32.953 May have multiple subsystem ports: No 00:08:32.953 May have multiple controllers: No 00:08:32.953 Associated with SR-IOV VF: No 00:08:32.953 Max Data Transfer Size: 524288 00:08:32.953 Max Number of Namespaces: 256 00:08:32.953 Max Number of I/O Queues: 64 00:08:32.953 NVMe Specification Version (VS): 1.4 00:08:32.953 NVMe Specification Version (Identify): 1.4 00:08:32.953 Maximum Queue Entries: 2048 00:08:32.953 Contiguous Queues Required: Yes 00:08:32.953 Arbitration Mechanisms Supported 00:08:32.953 Weighted Round Robin: Not Supported 00:08:32.953 Vendor Specific: Not Supported 00:08:32.953 Reset Timeout: 7500 ms 00:08:32.953 Doorbell Stride: 4 bytes 00:08:32.953 NVM Subsystem Reset: Not Supported 00:08:32.953 Command Sets Supported 00:08:32.953 NVM Command Set: Supported 00:08:32.953 Boot Partition: Not Supported 00:08:32.953 Memory Page Size Minimum: 4096 bytes 00:08:32.953 Memory Page Size Maximum: 65536 bytes 00:08:32.953 Persistent Memory Region: Not Supported 00:08:32.953 
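Each namespace block above carries its own size arithmetic: the byte size is the size in LBAs times the data size of the current LBA format. A quick sketch of that check, with the format table and values copied from the dumps:

```python
# The eight LBA formats repeated in every dump above, as
# (data size, metadata size) pairs keyed by format index.
LBA_FORMATS = {
    0: (512, 0),  1: (512, 8),  2: (512, 16),  3: (512, 64),
    4: (4096, 0), 5: (4096, 8), 6: (4096, 16), 7: (4096, 64),
}

def namespace_bytes(lba_count: int, current_format: int) -> int:
    data_size, _meta = LBA_FORMATS[current_format]
    return lba_count * data_size

# 12341's namespace: 1310720 LBAs at format #04 (4096 B data) -> 5 GiB.
assert namespace_bytes(1310720, 4) == 5 * 2**30
# Each of 12342's three namespaces: 1048576 LBAs at format #04 -> 4 GiB.
assert namespace_bytes(1048576, 4) == 4 * 2**30
print("namespace size arithmetic checks out")
```

Note the GiB labels in the dumps depend on the current format being the 4096-byte one; at format #00 (512-byte data) the same LBA counts would come to 640 MiB and 512 MiB.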
Optional Asynchronous Events Supported 00:08:32.953 Namespace Attribute Notices: Supported 00:08:32.953 Firmware Activation Notices: Not Supported 00:08:32.953 ANA Change Notices: Not Supported 00:08:32.953 PLE Aggregate Log Change Notices: Not Supported 00:08:32.953 LBA Status Info Alert Notices: Not Supported 00:08:32.953 EGE Aggregate Log Change Notices: Not Supported 00:08:32.953 Normal NVM Subsystem Shutdown event: Not Supported 00:08:32.953 Zone Descriptor Change Notices: Not Supported 00:08:32.953 Discovery Log Change Notices: Not Supported 00:08:32.953 Controller Attributes 00:08:32.953 128-bit Host Identifier: Not Supported 00:08:32.953 Non-Operational Permissive Mode: Not Supported 00:08:32.953 NVM Sets: Not Supported 00:08:32.953 Read Recovery Levels: Not Supported 00:08:32.953 Endurance Groups: Not Supported 00:08:32.953 Predictable Latency Mode: Not Supported 00:08:32.953 Traffic Based Keep ALive: Not Supported 00:08:32.953 Namespace Granularity: Not Supported 00:08:32.953 SQ Associations: Not Supported 00:08:32.953 UUID List: Not Supported 00:08:32.953 Multi-Domain Subsystem: Not Supported 00:08:32.953 Fixed Capacity Management: Not Supported 00:08:32.953 Variable Capacity Management: Not Supported 00:08:32.953 Delete Endurance Group: Not Supported 00:08:32.953 Delete NVM Set: Not Supported 00:08:32.953 Extended LBA Formats Supported: Supported 00:08:32.953 Flexible Data Placement Supported: Not Supported 00:08:32.953 00:08:32.953 Controller Memory Buffer Support 00:08:32.953 ================================ 00:08:32.953 Supported: No 00:08:32.953 00:08:32.953 Persistent Memory Region Support 00:08:32.953 ================================ 00:08:32.953 Supported: No 00:08:32.953 00:08:32.953 Admin Command Set Attributes 00:08:32.953 ============================ 00:08:32.953 Security Send/Receive: Not Supported 00:08:32.953 Format NVM: Supported 00:08:32.953 Firmware Activate/Download: Not Supported 00:08:32.953 Namespace Management: Supported 00:08:32.953 Device Self-Test: Not Supported 00:08:32.953 Directives: Supported 00:08:32.953 NVMe-MI: Not Supported 00:08:32.953 Virtualization Management: Not Supported 00:08:32.953 Doorbell Buffer Config: Supported 00:08:32.953 Get LBA Status Capability: Not Supported 00:08:32.953 Command & Feature Lockdown Capability: Not Supported 00:08:32.953 Abort Command Limit: 4 00:08:32.953 Async Event Request Limit: 4 00:08:32.953 Number of Firmware Slots: N/A 00:08:32.953 Firmware Slot 1 Read-Only: N/A 00:08:32.953 Firmware Activation Without Reset: N/A 00:08:32.953 Multiple Update Detection Support: N/A 00:08:32.953 Firmware Update Granularity: No Information Provided 00:08:32.953 Per-Namespace SMART Log: Yes 00:08:32.953 Asymmetric Namespace Access Log Page: Not Supported 00:08:32.953 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:32.953 Command Effects Log Page: Supported 00:08:32.953 Get Log Page Extended Data: Supported 00:08:32.953 Telemetry Log Pages: Not Supported 00:08:32.953 Persistent Event Log Pages: Not Supported 00:08:32.953 Supported Log Pages Log Page: May Support 00:08:32.953 Commands Supported & Effects Log Page: Not Supported 00:08:32.953 Feature Identifiers & Effects Log Page:May Support 00:08:32.953 NVMe-MI Commands & Effects Log Page: May Support 00:08:32.953 Data Area 4 for Telemetry Log: Not Supported 00:08:32.953 Error Log Page Entries Supported: 1 00:08:32.953 Keep Alive: Not Supported 00:08:32.953 00:08:32.953 NVM Command Set Attributes 00:08:32.953 ========================== 00:08:32.953 Submission Queue Entry Size 
00:08:32.953 Max: 64 00:08:32.953 Min: 64 00:08:32.953 Completion Queue Entry Size 00:08:32.953 Max: 16 00:08:32.953 Min: 16 00:08:32.953 Number of Namespaces: 256 00:08:32.953 Compare Command: Supported 00:08:32.953 Write Uncorrectable Command: Not Supported 00:08:32.953 Dataset Management Command: Supported 00:08:32.953 Write Zeroes Command: Supported 00:08:32.953 Set Features Save Field: Supported 00:08:32.953 Reservations: Not Supported 00:08:32.953 Timestamp: Supported 00:08:32.953 Copy: Supported 00:08:32.953 Volatile Write Cache: Present 00:08:32.953 Atomic Write Unit (Normal): 1 00:08:32.953 Atomic Write Unit (PFail): 1 00:08:32.953 Atomic Compare & Write Unit: 1 00:08:32.953 Fused Compare & Write: Not Supported 00:08:32.953 Scatter-Gather List 00:08:32.953 SGL Command Set: Supported 00:08:32.953 SGL Keyed: Not Supported 00:08:32.953 SGL Bit Bucket Descriptor: Not Supported 00:08:32.953 SGL Metadata Pointer: Not Supported 00:08:32.953 Oversized SGL: Not Supported 00:08:32.953 SGL Metadata Address: Not Supported 00:08:32.953 SGL Offset: Not Supported 00:08:32.953 Transport SGL Data Block: Not Supported 00:08:32.953 Replay Protected Memory Block: Not Supported 00:08:32.953 00:08:32.953 Firmware Slot Information 00:08:32.953 ========================= 00:08:32.953 Active slot: 1 00:08:32.953 Slot 1 Firmware Revision: 1.0 00:08:32.953 00:08:32.953 00:08:32.953 Commands Supported and Effects 00:08:32.953 ============================== 00:08:32.953 Admin Commands 00:08:32.953 -------------- 00:08:32.953 Delete I/O Submission Queue (00h): Supported 00:08:32.953 Create I/O Submission Queue (01h): Supported 00:08:32.953 Get Log Page (02h): Supported 00:08:32.953 Delete I/O Completion Queue (04h): Supported 00:08:32.953 Create I/O Completion Queue (05h): Supported 00:08:32.953 Identify (06h): Supported 00:08:32.953 Abort (08h): Supported 00:08:32.953 Set Features (09h): Supported 00:08:32.953 Get Features (0Ah): Supported 00:08:32.953 Asynchronous Event Request (0Ch): Supported 00:08:32.953 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:32.953 Directive Send (19h): Supported 00:08:32.953 Directive Receive (1Ah): Supported 00:08:32.953 Virtualization Management (1Ch): Supported 00:08:32.953 Doorbell Buffer Config (7Ch): Supported 00:08:32.953 Format NVM (80h): Supported LBA-Change 00:08:32.953 I/O Commands 00:08:32.953 ------------ 00:08:32.953 Flush (00h): Supported LBA-Change 00:08:32.953 Write (01h): Supported LBA-Change 00:08:32.953 Read (02h): Supported 00:08:32.953 Compare (05h): Supported 00:08:32.953 Write Zeroes (08h): Supported LBA-Change 00:08:32.953 Dataset Management (09h): Supported LBA-Change 00:08:32.953 Unknown (0Ch): Supported 00:08:32.953 Unknown (12h): Supported 00:08:32.953 Copy (19h): Supported LBA-Change 00:08:32.954 Unknown (1Dh): Supported LBA-Change 00:08:32.954 00:08:32.954 Error Log 00:08:32.954 ========= 00:08:32.954 00:08:32.954 Arbitration 00:08:32.954 =========== 00:08:32.954 Arbitration Burst: no limit 00:08:32.954 00:08:32.954 Power Management 00:08:32.954 ================ 00:08:32.954 Number of Power States: 1 00:08:32.954 Current Power State: Power State #0 00:08:32.954 Power State #0: 00:08:32.954 Max Power: 25.00 W 00:08:32.954 Non-Operational State: Operational 00:08:32.954 Entry Latency: 16 microseconds 00:08:32.954 Exit Latency: 4 microseconds 00:08:32.954 Relative Read Throughput: 0 00:08:32.954 Relative Read Latency: 0 00:08:32.954 Relative Write Throughput: 0 00:08:32.954 Relative Write Latency: 0 00:08:32.954 Idle Power: Not 
Reported 00:08:32.954 Active Power: Not Reported 00:08:32.954 Non-Operational Permissive Mode: Not Supported 00:08:32.954 00:08:32.954 Health Information 00:08:32.954 ================== 00:08:32.954 Critical Warnings: 00:08:32.954 Available Spare Space: OK 00:08:32.954 Temperature: OK 00:08:32.954 Device Reliability: OK 00:08:32.954 Read Only: No 00:08:32.954 Volatile Memory Backup: OK 00:08:32.954 Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.954 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:32.954 Available Spare: 0% 00:08:32.954 Available Spare Threshold: 0% 00:08:32.954 Life Percentage Used: 0% 00:08:32.954 Data Units Read: 4058 00:08:32.954 Data Units Written: 1863 00:08:32.954 Host Read Commands: 175678 00:08:32.954 Host Write Commands: 85918 00:08:32.954 Controller Busy Time: 0 minutes 00:08:32.954 Power Cycles: 0 00:08:32.954 Power On Hours: 0 hours 00:08:32.954 Unsafe Shutdowns: 0 00:08:32.954 Unrecoverable Media Errors: 0 00:08:32.954 Lifetime Error Log Entries: 0 00:08:32.954 Warning Temperature Time: 0 minutes 00:08:32.954 Critical Temperature Time: 0 minutes 00:08:32.954 00:08:32.954 Number of Queues 00:08:32.954 ================ 00:08:32.954 Number of I/O Submission Queues: 64 00:08:32.954 Number of I/O Completion Queues: 64 00:08:32.954 00:08:32.954 ZNS Specific Controller Data 00:08:32.954 ============================ 00:08:32.954 Zone Append Size Limit: 0 00:08:32.954 00:08:32.954 00:08:32.954 Active Namespaces 00:08:32.954 ================= 00:08:32.954 Namespace ID:1 00:08:32.954 Error Recovery Timeout: Unlimited 00:08:32.954 Command Set Identifier: NVM (00h) 00:08:32.954 Deallocate: Supported 00:08:32.954 Deallocated/Unwritten Error: Supported 00:08:32.954 Deallocated Read Value: All 0x00 00:08:32.954 Deallocate in Write Zeroes: Not Supported 00:08:32.954 Deallocated Guard Field: 0xFFFF 00:08:32.954 Flush: Supported 00:08:32.954 Reservation: Not Supported 00:08:32.954 Namespace Sharing Capabilities: Private 00:08:32.954 Size (in LBAs): 1048576 (4GiB) 00:08:32.954 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.954 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.954 Thin Provisioning: Not Supported 00:08:32.954 Per-NS Atomic Units: No 00:08:32.954 Maximum Single Source Range Length: 128 00:08:32.954 Maximum Copy Length: 128 00:08:32.954 Maximum Source Range Count: 128 00:08:32.954 NGUID/EUI64 Never Reused: No 00:08:32.954 Namespace Write Protected: No 00:08:32.954 Number of LBA Formats: 8 00:08:32.954 Current LBA Format: LBA Format #04 00:08:32.954 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.954 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.954 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.954 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.954 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.954 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.954 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.954 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.954 00:08:32.954 Namespace ID:2 00:08:32.954 Error Recovery Timeout: Unlimited 00:08:32.954 Command Set Identifier: NVM (00h) 00:08:32.954 Deallocate: Supported 00:08:32.954 Deallocated/Unwritten Error: Supported 00:08:32.954 Deallocated Read Value: All 0x00 00:08:32.954 Deallocate in Write Zeroes: Not Supported 00:08:32.954 Deallocated Guard Field: 0xFFFF 00:08:32.954 Flush: Supported 00:08:32.954 Reservation: Not Supported 00:08:32.954 Namespace Sharing Capabilities: Private 00:08:32.954 Size (in LBAs): 1048576 (4GiB) 00:08:32.954 
Capacity (in LBAs): 1048576 (4GiB) 00:08:32.954 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.954 Thin Provisioning: Not Supported 00:08:32.954 Per-NS Atomic Units: No 00:08:32.954 Maximum Single Source Range Length: 128 00:08:32.954 Maximum Copy Length: 128 00:08:32.954 Maximum Source Range Count: 128 00:08:32.954 NGUID/EUI64 Never Reused: No 00:08:32.954 Namespace Write Protected: No 00:08:32.954 Number of LBA Formats: 8 00:08:32.954 Current LBA Format: LBA Format #04 00:08:32.954 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.954 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.954 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.954 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.954 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.954 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.954 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.954 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.954 00:08:32.954 Namespace ID:3 00:08:32.954 Error Recovery Timeout: Unlimited 00:08:32.954 Command Set Identifier: NVM (00h) 00:08:32.954 Deallocate: Supported 00:08:32.954 Deallocated/Unwritten Error: Supported 00:08:32.954 Deallocated Read Value: All 0x00 00:08:32.954 Deallocate in Write Zeroes: Not Supported 00:08:32.954 Deallocated Guard Field: 0xFFFF 00:08:32.954 Flush: Supported 00:08:32.954 Reservation: Not Supported 00:08:32.954 Namespace Sharing Capabilities: Private 00:08:32.954 Size (in LBAs): 1048576 (4GiB) 00:08:32.954 Capacity (in LBAs): 1048576 (4GiB) 00:08:32.954 Utilization (in LBAs): 1048576 (4GiB) 00:08:32.954 Thin Provisioning: Not Supported 00:08:32.954 Per-NS Atomic Units: No 00:08:32.954 Maximum Single Source Range Length: 128 00:08:32.954 Maximum Copy Length: 128 00:08:32.954 Maximum Source Range Count: 128 00:08:32.954 NGUID/EUI64 Never Reused: No 00:08:32.954 Namespace Write Protected: No 00:08:32.954 Number of LBA Formats: 8 00:08:32.954 Current LBA Format: LBA Format #04 00:08:32.954 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:32.954 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:32.954 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:32.954 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:32.954 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:32.954 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:32.954 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:32.954 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:32.954 00:08:32.954 14:09:34 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:32.954 14:09:34 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:08:33.214 ===================================================== 00:08:33.214 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:08:33.214 ===================================================== 00:08:33.214 Controller Capabilities/Features 00:08:33.214 ================================ 00:08:33.214 Vendor ID: 1b36 00:08:33.214 Subsystem Vendor ID: 1af4 00:08:33.214 Serial Number: 12340 00:08:33.214 Model Number: QEMU NVMe Ctrl 00:08:33.214 Firmware Version: 8.0.0 00:08:33.214 Recommended Arb Burst: 6 00:08:33.214 IEEE OUI Identifier: 00 54 52 00:08:33.214 Multi-path I/O 00:08:33.214 May have multiple subsystem ports: No 00:08:33.214 May have multiple controllers: No 00:08:33.214 Associated with SR-IOV VF: No 00:08:33.214 Max Data Transfer Size: 524288 00:08:33.214 Max Number of Namespaces: 256 00:08:33.214 Max Number of I/O 
Queues: 64 00:08:33.214 NVMe Specification Version (VS): 1.4 00:08:33.214 NVMe Specification Version (Identify): 1.4 00:08:33.214 Maximum Queue Entries: 2048 00:08:33.214 Contiguous Queues Required: Yes 00:08:33.214 Arbitration Mechanisms Supported 00:08:33.214 Weighted Round Robin: Not Supported 00:08:33.214 Vendor Specific: Not Supported 00:08:33.214 Reset Timeout: 7500 ms 00:08:33.214 Doorbell Stride: 4 bytes 00:08:33.214 NVM Subsystem Reset: Not Supported 00:08:33.214 Command Sets Supported 00:08:33.214 NVM Command Set: Supported 00:08:33.214 Boot Partition: Not Supported 00:08:33.214 Memory Page Size Minimum: 4096 bytes 00:08:33.214 Memory Page Size Maximum: 65536 bytes 00:08:33.214 Persistent Memory Region: Not Supported 00:08:33.214 Optional Asynchronous Events Supported 00:08:33.214 Namespace Attribute Notices: Supported 00:08:33.214 Firmware Activation Notices: Not Supported 00:08:33.214 ANA Change Notices: Not Supported 00:08:33.214 PLE Aggregate Log Change Notices: Not Supported 00:08:33.214 LBA Status Info Alert Notices: Not Supported 00:08:33.214 EGE Aggregate Log Change Notices: Not Supported 00:08:33.214 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.214 Zone Descriptor Change Notices: Not Supported 00:08:33.214 Discovery Log Change Notices: Not Supported 00:08:33.214 Controller Attributes 00:08:33.214 128-bit Host Identifier: Not Supported 00:08:33.214 Non-Operational Permissive Mode: Not Supported 00:08:33.214 NVM Sets: Not Supported 00:08:33.214 Read Recovery Levels: Not Supported 00:08:33.214 Endurance Groups: Not Supported 00:08:33.214 Predictable Latency Mode: Not Supported 00:08:33.214 Traffic Based Keep ALive: Not Supported 00:08:33.214 Namespace Granularity: Not Supported 00:08:33.214 SQ Associations: Not Supported 00:08:33.214 UUID List: Not Supported 00:08:33.214 Multi-Domain Subsystem: Not Supported 00:08:33.214 Fixed Capacity Management: Not Supported 00:08:33.214 Variable Capacity Management: Not Supported 00:08:33.214 Delete Endurance Group: Not Supported 00:08:33.214 Delete NVM Set: Not Supported 00:08:33.214 Extended LBA Formats Supported: Supported 00:08:33.214 Flexible Data Placement Supported: Not Supported 00:08:33.214 00:08:33.214 Controller Memory Buffer Support 00:08:33.215 ================================ 00:08:33.215 Supported: No 00:08:33.215 00:08:33.215 Persistent Memory Region Support 00:08:33.215 ================================ 00:08:33.215 Supported: No 00:08:33.215 00:08:33.215 Admin Command Set Attributes 00:08:33.215 ============================ 00:08:33.215 Security Send/Receive: Not Supported 00:08:33.215 Format NVM: Supported 00:08:33.215 Firmware Activate/Download: Not Supported 00:08:33.215 Namespace Management: Supported 00:08:33.215 Device Self-Test: Not Supported 00:08:33.215 Directives: Supported 00:08:33.215 NVMe-MI: Not Supported 00:08:33.215 Virtualization Management: Not Supported 00:08:33.215 Doorbell Buffer Config: Supported 00:08:33.215 Get LBA Status Capability: Not Supported 00:08:33.215 Command & Feature Lockdown Capability: Not Supported 00:08:33.215 Abort Command Limit: 4 00:08:33.215 Async Event Request Limit: 4 00:08:33.215 Number of Firmware Slots: N/A 00:08:33.215 Firmware Slot 1 Read-Only: N/A 00:08:33.215 Firmware Activation Without Reset: N/A 00:08:33.215 Multiple Update Detection Support: N/A 00:08:33.215 Firmware Update Granularity: No Information Provided 00:08:33.215 Per-Namespace SMART Log: Yes 00:08:33.215 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.215 Subsystem NQN: 
nqn.2019-08.org.qemu:12340 00:08:33.215 Command Effects Log Page: Supported 00:08:33.215 Get Log Page Extended Data: Supported 00:08:33.215 Telemetry Log Pages: Not Supported 00:08:33.215 Persistent Event Log Pages: Not Supported 00:08:33.215 Supported Log Pages Log Page: May Support 00:08:33.215 Commands Supported & Effects Log Page: Not Supported 00:08:33.215 Feature Identifiers & Effects Log Page:May Support 00:08:33.215 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.215 Data Area 4 for Telemetry Log: Not Supported 00:08:33.215 Error Log Page Entries Supported: 1 00:08:33.215 Keep Alive: Not Supported 00:08:33.215 00:08:33.215 NVM Command Set Attributes 00:08:33.215 ========================== 00:08:33.215 Submission Queue Entry Size 00:08:33.215 Max: 64 00:08:33.215 Min: 64 00:08:33.215 Completion Queue Entry Size 00:08:33.215 Max: 16 00:08:33.215 Min: 16 00:08:33.215 Number of Namespaces: 256 00:08:33.215 Compare Command: Supported 00:08:33.215 Write Uncorrectable Command: Not Supported 00:08:33.215 Dataset Management Command: Supported 00:08:33.215 Write Zeroes Command: Supported 00:08:33.215 Set Features Save Field: Supported 00:08:33.215 Reservations: Not Supported 00:08:33.215 Timestamp: Supported 00:08:33.215 Copy: Supported 00:08:33.215 Volatile Write Cache: Present 00:08:33.215 Atomic Write Unit (Normal): 1 00:08:33.215 Atomic Write Unit (PFail): 1 00:08:33.215 Atomic Compare & Write Unit: 1 00:08:33.215 Fused Compare & Write: Not Supported 00:08:33.215 Scatter-Gather List 00:08:33.215 SGL Command Set: Supported 00:08:33.215 SGL Keyed: Not Supported 00:08:33.215 SGL Bit Bucket Descriptor: Not Supported 00:08:33.215 SGL Metadata Pointer: Not Supported 00:08:33.215 Oversized SGL: Not Supported 00:08:33.215 SGL Metadata Address: Not Supported 00:08:33.215 SGL Offset: Not Supported 00:08:33.215 Transport SGL Data Block: Not Supported 00:08:33.215 Replay Protected Memory Block: Not Supported 00:08:33.215 00:08:33.215 Firmware Slot Information 00:08:33.215 ========================= 00:08:33.215 Active slot: 1 00:08:33.215 Slot 1 Firmware Revision: 1.0 00:08:33.215 00:08:33.215 00:08:33.215 Commands Supported and Effects 00:08:33.215 ============================== 00:08:33.215 Admin Commands 00:08:33.215 -------------- 00:08:33.215 Delete I/O Submission Queue (00h): Supported 00:08:33.215 Create I/O Submission Queue (01h): Supported 00:08:33.215 Get Log Page (02h): Supported 00:08:33.215 Delete I/O Completion Queue (04h): Supported 00:08:33.215 Create I/O Completion Queue (05h): Supported 00:08:33.215 Identify (06h): Supported 00:08:33.215 Abort (08h): Supported 00:08:33.215 Set Features (09h): Supported 00:08:33.215 Get Features (0Ah): Supported 00:08:33.215 Asynchronous Event Request (0Ch): Supported 00:08:33.215 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.215 Directive Send (19h): Supported 00:08:33.215 Directive Receive (1Ah): Supported 00:08:33.215 Virtualization Management (1Ch): Supported 00:08:33.215 Doorbell Buffer Config (7Ch): Supported 00:08:33.215 Format NVM (80h): Supported LBA-Change 00:08:33.215 I/O Commands 00:08:33.215 ------------ 00:08:33.215 Flush (00h): Supported LBA-Change 00:08:33.215 Write (01h): Supported LBA-Change 00:08:33.215 Read (02h): Supported 00:08:33.215 Compare (05h): Supported 00:08:33.215 Write Zeroes (08h): Supported LBA-Change 00:08:33.215 Dataset Management (09h): Supported LBA-Change 00:08:33.215 Unknown (0Ch): Supported 00:08:33.215 Unknown (12h): Supported 00:08:33.215 Copy (19h): Supported LBA-Change 
00:08:33.215 Unknown (1Dh): Supported LBA-Change 00:08:33.215 00:08:33.215 Error Log 00:08:33.215 ========= 00:08:33.215 00:08:33.215 Arbitration 00:08:33.215 =========== 00:08:33.215 Arbitration Burst: no limit 00:08:33.215 00:08:33.215 Power Management 00:08:33.215 ================ 00:08:33.215 Number of Power States: 1 00:08:33.215 Current Power State: Power State #0 00:08:33.215 Power State #0: 00:08:33.215 Max Power: 25.00 W 00:08:33.215 Non-Operational State: Operational 00:08:33.215 Entry Latency: 16 microseconds 00:08:33.215 Exit Latency: 4 microseconds 00:08:33.215 Relative Read Throughput: 0 00:08:33.215 Relative Read Latency: 0 00:08:33.215 Relative Write Throughput: 0 00:08:33.215 Relative Write Latency: 0 00:08:33.215 Idle Power: Not Reported 00:08:33.215 Active Power: Not Reported 00:08:33.215 Non-Operational Permissive Mode: Not Supported 00:08:33.215 00:08:33.215 Health Information 00:08:33.215 ================== 00:08:33.215 Critical Warnings: 00:08:33.215 Available Spare Space: OK 00:08:33.215 Temperature: OK 00:08:33.215 Device Reliability: OK 00:08:33.215 Read Only: No 00:08:33.215 Volatile Memory Backup: OK 00:08:33.215 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.215 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.215 Available Spare: 0% 00:08:33.215 Available Spare Threshold: 0% 00:08:33.215 Life Percentage Used: 0% 00:08:33.215 Data Units Read: 1942 00:08:33.215 Data Units Written: 891 00:08:33.215 Host Read Commands: 87851 00:08:33.215 Host Write Commands: 43534 00:08:33.215 Controller Busy Time: 0 minutes 00:08:33.215 Power Cycles: 0 00:08:33.215 Power On Hours: 0 hours 00:08:33.215 Unsafe Shutdowns: 0 00:08:33.215 Unrecoverable Media Errors: 0 00:08:33.215 Lifetime Error Log Entries: 0 00:08:33.215 Warning Temperature Time: 0 minutes 00:08:33.215 Critical Temperature Time: 0 minutes 00:08:33.215 00:08:33.215 Number of Queues 00:08:33.215 ================ 00:08:33.215 Number of I/O Submission Queues: 64 00:08:33.215 Number of I/O Completion Queues: 64 00:08:33.215 00:08:33.215 ZNS Specific Controller Data 00:08:33.215 ============================ 00:08:33.215 Zone Append Size Limit: 0 00:08:33.215 00:08:33.215 00:08:33.215 Active Namespaces 00:08:33.215 ================= 00:08:33.215 Namespace ID:1 00:08:33.215 Error Recovery Timeout: Unlimited 00:08:33.215 Command Set Identifier: NVM (00h) 00:08:33.215 Deallocate: Supported 00:08:33.215 Deallocated/Unwritten Error: Supported 00:08:33.215 Deallocated Read Value: All 0x00 00:08:33.215 Deallocate in Write Zeroes: Not Supported 00:08:33.215 Deallocated Guard Field: 0xFFFF 00:08:33.215 Flush: Supported 00:08:33.215 Reservation: Not Supported 00:08:33.215 Metadata Transferred as: Separate Metadata Buffer 00:08:33.215 Namespace Sharing Capabilities: Private 00:08:33.215 Size (in LBAs): 1548666 (5GiB) 00:08:33.215 Capacity (in LBAs): 1548666 (5GiB) 00:08:33.215 Utilization (in LBAs): 1548666 (5GiB) 00:08:33.215 Thin Provisioning: Not Supported 00:08:33.215 Per-NS Atomic Units: No 00:08:33.215 Maximum Single Source Range Length: 128 00:08:33.215 Maximum Copy Length: 128 00:08:33.215 Maximum Source Range Count: 128 00:08:33.215 NGUID/EUI64 Never Reused: No 00:08:33.215 Namespace Write Protected: No 00:08:33.215 Number of LBA Formats: 8 00:08:33.215 Current LBA Format: LBA Format #07 00:08:33.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.215 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.215 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.215 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:08:33.216 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.216 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.216 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.216 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.216 00:08:33.216 14:09:34 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.216 14:09:34 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:08:33.476 ===================================================== 00:08:33.476 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:33.476 ===================================================== 00:08:33.476 Controller Capabilities/Features 00:08:33.476 ================================ 00:08:33.476 Vendor ID: 1b36 00:08:33.476 Subsystem Vendor ID: 1af4 00:08:33.476 Serial Number: 12341 00:08:33.476 Model Number: QEMU NVMe Ctrl 00:08:33.476 Firmware Version: 8.0.0 00:08:33.476 Recommended Arb Burst: 6 00:08:33.476 IEEE OUI Identifier: 00 54 52 00:08:33.476 Multi-path I/O 00:08:33.476 May have multiple subsystem ports: No 00:08:33.476 May have multiple controllers: No 00:08:33.476 Associated with SR-IOV VF: No 00:08:33.476 Max Data Transfer Size: 524288 00:08:33.476 Max Number of Namespaces: 256 00:08:33.476 Max Number of I/O Queues: 64 00:08:33.476 NVMe Specification Version (VS): 1.4 00:08:33.476 NVMe Specification Version (Identify): 1.4 00:08:33.476 Maximum Queue Entries: 2048 00:08:33.476 Contiguous Queues Required: Yes 00:08:33.476 Arbitration Mechanisms Supported 00:08:33.476 Weighted Round Robin: Not Supported 00:08:33.476 Vendor Specific: Not Supported 00:08:33.476 Reset Timeout: 7500 ms 00:08:33.476 Doorbell Stride: 4 bytes 00:08:33.476 NVM Subsystem Reset: Not Supported 00:08:33.476 Command Sets Supported 00:08:33.476 NVM Command Set: Supported 00:08:33.476 Boot Partition: Not Supported 00:08:33.476 Memory Page Size Minimum: 4096 bytes 00:08:33.476 Memory Page Size Maximum: 65536 bytes 00:08:33.476 Persistent Memory Region: Not Supported 00:08:33.476 Optional Asynchronous Events Supported 00:08:33.476 Namespace Attribute Notices: Supported 00:08:33.476 Firmware Activation Notices: Not Supported 00:08:33.476 ANA Change Notices: Not Supported 00:08:33.476 PLE Aggregate Log Change Notices: Not Supported 00:08:33.476 LBA Status Info Alert Notices: Not Supported 00:08:33.476 EGE Aggregate Log Change Notices: Not Supported 00:08:33.476 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.476 Zone Descriptor Change Notices: Not Supported 00:08:33.476 Discovery Log Change Notices: Not Supported 00:08:33.476 Controller Attributes 00:08:33.476 128-bit Host Identifier: Not Supported 00:08:33.476 Non-Operational Permissive Mode: Not Supported 00:08:33.476 NVM Sets: Not Supported 00:08:33.476 Read Recovery Levels: Not Supported 00:08:33.476 Endurance Groups: Not Supported 00:08:33.476 Predictable Latency Mode: Not Supported 00:08:33.476 Traffic Based Keep ALive: Not Supported 00:08:33.476 Namespace Granularity: Not Supported 00:08:33.476 SQ Associations: Not Supported 00:08:33.476 UUID List: Not Supported 00:08:33.476 Multi-Domain Subsystem: Not Supported 00:08:33.476 Fixed Capacity Management: Not Supported 00:08:33.476 Variable Capacity Management: Not Supported 00:08:33.476 Delete Endurance Group: Not Supported 00:08:33.476 Delete NVM Set: Not Supported 00:08:33.476 Extended LBA Formats Supported: Supported 00:08:33.476 Flexible Data Placement Supported: Not Supported 00:08:33.476 00:08:33.476 
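The Health Information blocks above report temperature in Kelvin and derive the Celsius figure as Kelvin minus 273 (the tool rounds rather than subtracting 273.15): 323 K prints as 50 C against a 343 K (70 C) threshold. A small sketch of the same headroom check, using the values from the dumps:

```python
# Temperature check mirroring the Health Information blocks above.
# The dumps derive Celsius as Kelvin - 273 (rounded, not 273.15).
def kelvin_to_celsius(k: int) -> int:
    return k - 273

current_k, threshold_k = 323, 343  # values from every dump in this run

assert kelvin_to_celsius(current_k) == 50
assert kelvin_to_celsius(threshold_k) == 70

if current_k >= threshold_k:
    print("critical temperature warning")
else:
    print(f"temperature OK, {threshold_k - current_k} K below threshold")  # 20 K
```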
Controller Memory Buffer Support 00:08:33.476 ================================ 00:08:33.476 Supported: No 00:08:33.476 00:08:33.476 Persistent Memory Region Support 00:08:33.476 ================================ 00:08:33.476 Supported: No 00:08:33.476 00:08:33.476 Admin Command Set Attributes 00:08:33.476 ============================ 00:08:33.476 Security Send/Receive: Not Supported 00:08:33.476 Format NVM: Supported 00:08:33.476 Firmware Activate/Download: Not Supported 00:08:33.476 Namespace Management: Supported 00:08:33.476 Device Self-Test: Not Supported 00:08:33.476 Directives: Supported 00:08:33.476 NVMe-MI: Not Supported 00:08:33.476 Virtualization Management: Not Supported 00:08:33.476 Doorbell Buffer Config: Supported 00:08:33.476 Get LBA Status Capability: Not Supported 00:08:33.476 Command & Feature Lockdown Capability: Not Supported 00:08:33.476 Abort Command Limit: 4 00:08:33.476 Async Event Request Limit: 4 00:08:33.476 Number of Firmware Slots: N/A 00:08:33.476 Firmware Slot 1 Read-Only: N/A 00:08:33.476 Firmware Activation Without Reset: N/A 00:08:33.476 Multiple Update Detection Support: N/A 00:08:33.476 Firmware Update Granularity: No Information Provided 00:08:33.476 Per-Namespace SMART Log: Yes 00:08:33.476 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.476 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:33.476 Command Effects Log Page: Supported 00:08:33.476 Get Log Page Extended Data: Supported 00:08:33.476 Telemetry Log Pages: Not Supported 00:08:33.476 Persistent Event Log Pages: Not Supported 00:08:33.476 Supported Log Pages Log Page: May Support 00:08:33.476 Commands Supported & Effects Log Page: Not Supported 00:08:33.476 Feature Identifiers & Effects Log Page:May Support 00:08:33.476 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.476 Data Area 4 for Telemetry Log: Not Supported 00:08:33.476 Error Log Page Entries Supported: 1 00:08:33.476 Keep Alive: Not Supported 00:08:33.476 00:08:33.476 NVM Command Set Attributes 00:08:33.476 ========================== 00:08:33.476 Submission Queue Entry Size 00:08:33.476 Max: 64 00:08:33.476 Min: 64 00:08:33.476 Completion Queue Entry Size 00:08:33.476 Max: 16 00:08:33.476 Min: 16 00:08:33.476 Number of Namespaces: 256 00:08:33.476 Compare Command: Supported 00:08:33.476 Write Uncorrectable Command: Not Supported 00:08:33.476 Dataset Management Command: Supported 00:08:33.476 Write Zeroes Command: Supported 00:08:33.476 Set Features Save Field: Supported 00:08:33.476 Reservations: Not Supported 00:08:33.476 Timestamp: Supported 00:08:33.476 Copy: Supported 00:08:33.476 Volatile Write Cache: Present 00:08:33.476 Atomic Write Unit (Normal): 1 00:08:33.476 Atomic Write Unit (PFail): 1 00:08:33.476 Atomic Compare & Write Unit: 1 00:08:33.476 Fused Compare & Write: Not Supported 00:08:33.476 Scatter-Gather List 00:08:33.476 SGL Command Set: Supported 00:08:33.476 SGL Keyed: Not Supported 00:08:33.476 SGL Bit Bucket Descriptor: Not Supported 00:08:33.476 SGL Metadata Pointer: Not Supported 00:08:33.476 Oversized SGL: Not Supported 00:08:33.476 SGL Metadata Address: Not Supported 00:08:33.476 SGL Offset: Not Supported 00:08:33.476 Transport SGL Data Block: Not Supported 00:08:33.477 Replay Protected Memory Block: Not Supported 00:08:33.477 00:08:33.477 Firmware Slot Information 00:08:33.477 ========================= 00:08:33.477 Active slot: 1 00:08:33.477 Slot 1 Firmware Revision: 1.0 00:08:33.477 00:08:33.477 00:08:33.477 Commands Supported and Effects 00:08:33.477 ============================== 
00:08:33.477 Admin Commands 00:08:33.477 -------------- 00:08:33.477 Delete I/O Submission Queue (00h): Supported 00:08:33.477 Create I/O Submission Queue (01h): Supported 00:08:33.477 Get Log Page (02h): Supported 00:08:33.477 Delete I/O Completion Queue (04h): Supported 00:08:33.477 Create I/O Completion Queue (05h): Supported 00:08:33.477 Identify (06h): Supported 00:08:33.477 Abort (08h): Supported 00:08:33.477 Set Features (09h): Supported 00:08:33.477 Get Features (0Ah): Supported 00:08:33.477 Asynchronous Event Request (0Ch): Supported 00:08:33.477 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.477 Directive Send (19h): Supported 00:08:33.477 Directive Receive (1Ah): Supported 00:08:33.477 Virtualization Management (1Ch): Supported 00:08:33.477 Doorbell Buffer Config (7Ch): Supported 00:08:33.477 Format NVM (80h): Supported LBA-Change 00:08:33.477 I/O Commands 00:08:33.477 ------------ 00:08:33.477 Flush (00h): Supported LBA-Change 00:08:33.477 Write (01h): Supported LBA-Change 00:08:33.477 Read (02h): Supported 00:08:33.477 Compare (05h): Supported 00:08:33.477 Write Zeroes (08h): Supported LBA-Change 00:08:33.477 Dataset Management (09h): Supported LBA-Change 00:08:33.477 Unknown (0Ch): Supported 00:08:33.477 Unknown (12h): Supported 00:08:33.477 Copy (19h): Supported LBA-Change 00:08:33.477 Unknown (1Dh): Supported LBA-Change 00:08:33.477 00:08:33.477 Error Log 00:08:33.477 ========= 00:08:33.477 00:08:33.477 Arbitration 00:08:33.477 =========== 00:08:33.477 Arbitration Burst: no limit 00:08:33.477 00:08:33.477 Power Management 00:08:33.477 ================ 00:08:33.477 Number of Power States: 1 00:08:33.477 Current Power State: Power State #0 00:08:33.477 Power State #0: 00:08:33.477 Max Power: 25.00 W 00:08:33.477 Non-Operational State: Operational 00:08:33.477 Entry Latency: 16 microseconds 00:08:33.477 Exit Latency: 4 microseconds 00:08:33.477 Relative Read Throughput: 0 00:08:33.477 Relative Read Latency: 0 00:08:33.477 Relative Write Throughput: 0 00:08:33.477 Relative Write Latency: 0 00:08:33.477 Idle Power: Not Reported 00:08:33.477 Active Power: Not Reported 00:08:33.477 Non-Operational Permissive Mode: Not Supported 00:08:33.477 00:08:33.477 Health Information 00:08:33.477 ================== 00:08:33.477 Critical Warnings: 00:08:33.477 Available Spare Space: OK 00:08:33.477 Temperature: OK 00:08:33.477 Device Reliability: OK 00:08:33.477 Read Only: No 00:08:33.477 Volatile Memory Backup: OK 00:08:33.477 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.477 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.477 Available Spare: 0% 00:08:33.477 Available Spare Threshold: 0% 00:08:33.477 Life Percentage Used: 0% 00:08:33.477 Data Units Read: 1306 00:08:33.477 Data Units Written: 601 00:08:33.477 Host Read Commands: 58043 00:08:33.477 Host Write Commands: 28446 00:08:33.477 Controller Busy Time: 0 minutes 00:08:33.477 Power Cycles: 0 00:08:33.477 Power On Hours: 0 hours 00:08:33.477 Unsafe Shutdowns: 0 00:08:33.477 Unrecoverable Media Errors: 0 00:08:33.477 Lifetime Error Log Entries: 0 00:08:33.477 Warning Temperature Time: 0 minutes 00:08:33.477 Critical Temperature Time: 0 minutes 00:08:33.477 00:08:33.477 Number of Queues 00:08:33.477 ================ 00:08:33.477 Number of I/O Submission Queues: 64 00:08:33.477 Number of I/O Completion Queues: 64 00:08:33.477 00:08:33.477 ZNS Specific Controller Data 00:08:33.477 ============================ 00:08:33.477 Zone Append Size Limit: 0 00:08:33.477 00:08:33.477 00:08:33.477 Active Namespaces 
00:08:33.477 ================= 00:08:33.477 Namespace ID:1 00:08:33.477 Error Recovery Timeout: Unlimited 00:08:33.477 Command Set Identifier: NVM (00h) 00:08:33.477 Deallocate: Supported 00:08:33.477 Deallocated/Unwritten Error: Supported 00:08:33.477 Deallocated Read Value: All 0x00 00:08:33.477 Deallocate in Write Zeroes: Not Supported 00:08:33.477 Deallocated Guard Field: 0xFFFF 00:08:33.477 Flush: Supported 00:08:33.477 Reservation: Not Supported 00:08:33.477 Namespace Sharing Capabilities: Private 00:08:33.477 Size (in LBAs): 1310720 (5GiB) 00:08:33.477 Capacity (in LBAs): 1310720 (5GiB) 00:08:33.477 Utilization (in LBAs): 1310720 (5GiB) 00:08:33.477 Thin Provisioning: Not Supported 00:08:33.477 Per-NS Atomic Units: No 00:08:33.477 Maximum Single Source Range Length: 128 00:08:33.477 Maximum Copy Length: 128 00:08:33.477 Maximum Source Range Count: 128 00:08:33.477 NGUID/EUI64 Never Reused: No 00:08:33.477 Namespace Write Protected: No 00:08:33.477 Number of LBA Formats: 8 00:08:33.477 Current LBA Format: LBA Format #04 00:08:33.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.477 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.477 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.477 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.477 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.477 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.477 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.477 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.477 00:08:33.477 14:09:34 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.477 14:09:34 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:08:33.477 ===================================================== 00:08:33.477 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:33.477 ===================================================== 00:08:33.477 Controller Capabilities/Features 00:08:33.477 ================================ 00:08:33.477 Vendor ID: 1b36 00:08:33.477 Subsystem Vendor ID: 1af4 00:08:33.477 Serial Number: 12342 00:08:33.477 Model Number: QEMU NVMe Ctrl 00:08:33.477 Firmware Version: 8.0.0 00:08:33.477 Recommended Arb Burst: 6 00:08:33.477 IEEE OUI Identifier: 00 54 52 00:08:33.477 Multi-path I/O 00:08:33.477 May have multiple subsystem ports: No 00:08:33.477 May have multiple controllers: No 00:08:33.477 Associated with SR-IOV VF: No 00:08:33.477 Max Data Transfer Size: 524288 00:08:33.477 Max Number of Namespaces: 256 00:08:33.477 Max Number of I/O Queues: 64 00:08:33.477 NVMe Specification Version (VS): 1.4 00:08:33.477 NVMe Specification Version (Identify): 1.4 00:08:33.477 Maximum Queue Entries: 2048 00:08:33.477 Contiguous Queues Required: Yes 00:08:33.477 Arbitration Mechanisms Supported 00:08:33.477 Weighted Round Robin: Not Supported 00:08:33.477 Vendor Specific: Not Supported 00:08:33.477 Reset Timeout: 7500 ms 00:08:33.477 Doorbell Stride: 4 bytes 00:08:33.477 NVM Subsystem Reset: Not Supported 00:08:33.477 Command Sets Supported 00:08:33.477 NVM Command Set: Supported 00:08:33.477 Boot Partition: Not Supported 00:08:33.477 Memory Page Size Minimum: 4096 bytes 00:08:33.477 Memory Page Size Maximum: 65536 bytes 00:08:33.477 Persistent Memory Region: Not Supported 00:08:33.477 Optional Asynchronous Events Supported 00:08:33.477 Namespace Attribute Notices: Supported 00:08:33.477 Firmware Activation Notices: Not Supported 00:08:33.477 ANA Change Notices: Not Supported 
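nvme.sh drives the same identify binary over each bdf in turn, as the `for bdf in "${bdfs[@]}"` trace lines above show. A rough Python equivalent of that loop, reusing the binary path and the -r/-i arguments exactly as they appear in the log (running it outside the test harness, and this particular bdf list, are assumptions):

```python
import subprocess

# Sketch of the per-device loop nvme.sh runs above. The binary path and
# the -r/-i arguments are copied from the log; the bdf list mirrors the
# controllers identified in this run.
IDENTIFY = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"
BDFS = ["0000:00:06.0", "0000:00:07.0", "0000:00:08.0", "0000:00:09.0"]

for bdf in BDFS:
    result = subprocess.run(
        [IDENTIFY, "-r", f"trtype:PCIe traddr:{bdf}", "-i", "0"],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    # The second line of each dump names the controller, e.g.
    # "NVMe Controller at 0000:00:08.0 [1b36:0010]".
    print(lines[1] if len(lines) > 1 else bdf)
```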
00:08:33.477 PLE Aggregate Log Change Notices: Not Supported 00:08:33.477 LBA Status Info Alert Notices: Not Supported 00:08:33.477 EGE Aggregate Log Change Notices: Not Supported 00:08:33.477 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.477 Zone Descriptor Change Notices: Not Supported 00:08:33.477 Discovery Log Change Notices: Not Supported 00:08:33.477 Controller Attributes 00:08:33.477 128-bit Host Identifier: Not Supported 00:08:33.477 Non-Operational Permissive Mode: Not Supported 00:08:33.477 NVM Sets: Not Supported 00:08:33.477 Read Recovery Levels: Not Supported 00:08:33.477 Endurance Groups: Not Supported 00:08:33.477 Predictable Latency Mode: Not Supported 00:08:33.477 Traffic Based Keep ALive: Not Supported 00:08:33.477 Namespace Granularity: Not Supported 00:08:33.477 SQ Associations: Not Supported 00:08:33.477 UUID List: Not Supported 00:08:33.477 Multi-Domain Subsystem: Not Supported 00:08:33.477 Fixed Capacity Management: Not Supported 00:08:33.477 Variable Capacity Management: Not Supported 00:08:33.477 Delete Endurance Group: Not Supported 00:08:33.477 Delete NVM Set: Not Supported 00:08:33.477 Extended LBA Formats Supported: Supported 00:08:33.477 Flexible Data Placement Supported: Not Supported 00:08:33.477 00:08:33.477 Controller Memory Buffer Support 00:08:33.477 ================================ 00:08:33.477 Supported: No 00:08:33.477 00:08:33.478 Persistent Memory Region Support 00:08:33.478 ================================ 00:08:33.478 Supported: No 00:08:33.478 00:08:33.478 Admin Command Set Attributes 00:08:33.478 ============================ 00:08:33.478 Security Send/Receive: Not Supported 00:08:33.478 Format NVM: Supported 00:08:33.478 Firmware Activate/Download: Not Supported 00:08:33.478 Namespace Management: Supported 00:08:33.478 Device Self-Test: Not Supported 00:08:33.478 Directives: Supported 00:08:33.478 NVMe-MI: Not Supported 00:08:33.478 Virtualization Management: Not Supported 00:08:33.478 Doorbell Buffer Config: Supported 00:08:33.478 Get LBA Status Capability: Not Supported 00:08:33.478 Command & Feature Lockdown Capability: Not Supported 00:08:33.478 Abort Command Limit: 4 00:08:33.478 Async Event Request Limit: 4 00:08:33.478 Number of Firmware Slots: N/A 00:08:33.478 Firmware Slot 1 Read-Only: N/A 00:08:33.478 Firmware Activation Without Reset: N/A 00:08:33.478 Multiple Update Detection Support: N/A 00:08:33.478 Firmware Update Granularity: No Information Provided 00:08:33.478 Per-Namespace SMART Log: Yes 00:08:33.478 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.478 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:33.478 Command Effects Log Page: Supported 00:08:33.478 Get Log Page Extended Data: Supported 00:08:33.478 Telemetry Log Pages: Not Supported 00:08:33.478 Persistent Event Log Pages: Not Supported 00:08:33.478 Supported Log Pages Log Page: May Support 00:08:33.478 Commands Supported & Effects Log Page: Not Supported 00:08:33.478 Feature Identifiers & Effects Log Page:May Support 00:08:33.478 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.478 Data Area 4 for Telemetry Log: Not Supported 00:08:33.478 Error Log Page Entries Supported: 1 00:08:33.478 Keep Alive: Not Supported 00:08:33.478 00:08:33.478 NVM Command Set Attributes 00:08:33.478 ========================== 00:08:33.478 Submission Queue Entry Size 00:08:33.478 Max: 64 00:08:33.478 Min: 64 00:08:33.478 Completion Queue Entry Size 00:08:33.478 Max: 16 00:08:33.478 Min: 16 00:08:33.478 Number of Namespaces: 256 00:08:33.478 Compare Command: 
Supported 00:08:33.478 Write Uncorrectable Command: Not Supported 00:08:33.478 Dataset Management Command: Supported 00:08:33.478 Write Zeroes Command: Supported 00:08:33.478 Set Features Save Field: Supported 00:08:33.478 Reservations: Not Supported 00:08:33.478 Timestamp: Supported 00:08:33.478 Copy: Supported 00:08:33.478 Volatile Write Cache: Present 00:08:33.478 Atomic Write Unit (Normal): 1 00:08:33.478 Atomic Write Unit (PFail): 1 00:08:33.478 Atomic Compare & Write Unit: 1 00:08:33.478 Fused Compare & Write: Not Supported 00:08:33.478 Scatter-Gather List 00:08:33.478 SGL Command Set: Supported 00:08:33.478 SGL Keyed: Not Supported 00:08:33.478 SGL Bit Bucket Descriptor: Not Supported 00:08:33.478 SGL Metadata Pointer: Not Supported 00:08:33.478 Oversized SGL: Not Supported 00:08:33.478 SGL Metadata Address: Not Supported 00:08:33.478 SGL Offset: Not Supported 00:08:33.478 Transport SGL Data Block: Not Supported 00:08:33.478 Replay Protected Memory Block: Not Supported 00:08:33.478 00:08:33.478 Firmware Slot Information 00:08:33.478 ========================= 00:08:33.478 Active slot: 1 00:08:33.478 Slot 1 Firmware Revision: 1.0 00:08:33.478 00:08:33.478 00:08:33.478 Commands Supported and Effects 00:08:33.478 ============================== 00:08:33.478 Admin Commands 00:08:33.478 -------------- 00:08:33.478 Delete I/O Submission Queue (00h): Supported 00:08:33.478 Create I/O Submission Queue (01h): Supported 00:08:33.478 Get Log Page (02h): Supported 00:08:33.478 Delete I/O Completion Queue (04h): Supported 00:08:33.478 Create I/O Completion Queue (05h): Supported 00:08:33.478 Identify (06h): Supported 00:08:33.478 Abort (08h): Supported 00:08:33.478 Set Features (09h): Supported 00:08:33.478 Get Features (0Ah): Supported 00:08:33.478 Asynchronous Event Request (0Ch): Supported 00:08:33.478 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.478 Directive Send (19h): Supported 00:08:33.478 Directive Receive (1Ah): Supported 00:08:33.478 Virtualization Management (1Ch): Supported 00:08:33.478 Doorbell Buffer Config (7Ch): Supported 00:08:33.478 Format NVM (80h): Supported LBA-Change 00:08:33.478 I/O Commands 00:08:33.478 ------------ 00:08:33.478 Flush (00h): Supported LBA-Change 00:08:33.478 Write (01h): Supported LBA-Change 00:08:33.478 Read (02h): Supported 00:08:33.478 Compare (05h): Supported 00:08:33.478 Write Zeroes (08h): Supported LBA-Change 00:08:33.478 Dataset Management (09h): Supported LBA-Change 00:08:33.478 Unknown (0Ch): Supported 00:08:33.478 Unknown (12h): Supported 00:08:33.478 Copy (19h): Supported LBA-Change 00:08:33.478 Unknown (1Dh): Supported LBA-Change 00:08:33.478 00:08:33.478 Error Log 00:08:33.478 ========= 00:08:33.478 00:08:33.478 Arbitration 00:08:33.478 =========== 00:08:33.478 Arbitration Burst: no limit 00:08:33.478 00:08:33.478 Power Management 00:08:33.478 ================ 00:08:33.478 Number of Power States: 1 00:08:33.478 Current Power State: Power State #0 00:08:33.478 Power State #0: 00:08:33.478 Max Power: 25.00 W 00:08:33.478 Non-Operational State: Operational 00:08:33.478 Entry Latency: 16 microseconds 00:08:33.478 Exit Latency: 4 microseconds 00:08:33.478 Relative Read Throughput: 0 00:08:33.478 Relative Read Latency: 0 00:08:33.478 Relative Write Throughput: 0 00:08:33.478 Relative Write Latency: 0 00:08:33.478 Idle Power: Not Reported 00:08:33.478 Active Power: Not Reported 00:08:33.478 Non-Operational Permissive Mode: Not Supported 00:08:33.478 00:08:33.478 Health Information 00:08:33.478 ================== 00:08:33.478 
Critical Warnings: 00:08:33.478 Available Spare Space: OK 00:08:33.478 Temperature: OK 00:08:33.478 Device Reliability: OK 00:08:33.478 Read Only: No 00:08:33.478 Volatile Memory Backup: OK 00:08:33.478 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.478 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.478 Available Spare: 0% 00:08:33.478 Available Spare Threshold: 0% 00:08:33.478 Life Percentage Used: 0% 00:08:33.478 Data Units Read: 4058 00:08:33.478 Data Units Written: 1863 00:08:33.478 Host Read Commands: 175678 00:08:33.478 Host Write Commands: 85918 00:08:33.478 Controller Busy Time: 0 minutes 00:08:33.478 Power Cycles: 0 00:08:33.478 Power On Hours: 0 hours 00:08:33.478 Unsafe Shutdowns: 0 00:08:33.478 Unrecoverable Media Errors: 0 00:08:33.478 Lifetime Error Log Entries: 0 00:08:33.478 Warning Temperature Time: 0 minutes 00:08:33.478 Critical Temperature Time: 0 minutes 00:08:33.478 00:08:33.478 Number of Queues 00:08:33.478 ================ 00:08:33.478 Number of I/O Submission Queues: 64 00:08:33.478 Number of I/O Completion Queues: 64 00:08:33.478 00:08:33.478 ZNS Specific Controller Data 00:08:33.478 ============================ 00:08:33.478 Zone Append Size Limit: 0 00:08:33.478 00:08:33.478 00:08:33.478 Active Namespaces 00:08:33.478 ================= 00:08:33.478 Namespace ID:1 00:08:33.478 Error Recovery Timeout: Unlimited 00:08:33.478 Command Set Identifier: NVM (00h) 00:08:33.478 Deallocate: Supported 00:08:33.478 Deallocated/Unwritten Error: Supported 00:08:33.478 Deallocated Read Value: All 0x00 00:08:33.478 Deallocate in Write Zeroes: Not Supported 00:08:33.478 Deallocated Guard Field: 0xFFFF 00:08:33.478 Flush: Supported 00:08:33.478 Reservation: Not Supported 00:08:33.478 Namespace Sharing Capabilities: Private 00:08:33.478 Size (in LBAs): 1048576 (4GiB) 00:08:33.478 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.478 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.478 Thin Provisioning: Not Supported 00:08:33.478 Per-NS Atomic Units: No 00:08:33.478 Maximum Single Source Range Length: 128 00:08:33.478 Maximum Copy Length: 128 00:08:33.478 Maximum Source Range Count: 128 00:08:33.478 NGUID/EUI64 Never Reused: No 00:08:33.478 Namespace Write Protected: No 00:08:33.478 Number of LBA Formats: 8 00:08:33.478 Current LBA Format: LBA Format #04 00:08:33.478 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.478 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.478 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.478 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.478 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.478 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.478 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.478 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.478 00:08:33.478 Namespace ID:2 00:08:33.478 Error Recovery Timeout: Unlimited 00:08:33.478 Command Set Identifier: NVM (00h) 00:08:33.478 Deallocate: Supported 00:08:33.478 Deallocated/Unwritten Error: Supported 00:08:33.478 Deallocated Read Value: All 0x00 00:08:33.478 Deallocate in Write Zeroes: Not Supported 00:08:33.479 Deallocated Guard Field: 0xFFFF 00:08:33.479 Flush: Supported 00:08:33.479 Reservation: Not Supported 00:08:33.479 Namespace Sharing Capabilities: Private 00:08:33.479 Size (in LBAs): 1048576 (4GiB) 00:08:33.479 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.479 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.479 Thin Provisioning: Not Supported 00:08:33.479 Per-NS Atomic Units: No 00:08:33.479 Maximum Single 
Source Range Length: 128 00:08:33.479 Maximum Copy Length: 128 00:08:33.479 Maximum Source Range Count: 128 00:08:33.479 NGUID/EUI64 Never Reused: No 00:08:33.479 Namespace Write Protected: No 00:08:33.479 Number of LBA Formats: 8 00:08:33.479 Current LBA Format: LBA Format #04 00:08:33.479 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.479 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.479 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.479 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.479 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.738 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.738 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.738 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.738 00:08:33.738 Namespace ID:3 00:08:33.738 Error Recovery Timeout: Unlimited 00:08:33.738 Command Set Identifier: NVM (00h) 00:08:33.738 Deallocate: Supported 00:08:33.738 Deallocated/Unwritten Error: Supported 00:08:33.738 Deallocated Read Value: All 0x00 00:08:33.738 Deallocate in Write Zeroes: Not Supported 00:08:33.738 Deallocated Guard Field: 0xFFFF 00:08:33.738 Flush: Supported 00:08:33.738 Reservation: Not Supported 00:08:33.738 Namespace Sharing Capabilities: Private 00:08:33.738 Size (in LBAs): 1048576 (4GiB) 00:08:33.738 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.738 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.738 Thin Provisioning: Not Supported 00:08:33.738 Per-NS Atomic Units: No 00:08:33.738 Maximum Single Source Range Length: 128 00:08:33.738 Maximum Copy Length: 128 00:08:33.738 Maximum Source Range Count: 128 00:08:33.738 NGUID/EUI64 Never Reused: No 00:08:33.738 Namespace Write Protected: No 00:08:33.738 Number of LBA Formats: 8 00:08:33.738 Current LBA Format: LBA Format #04 00:08:33.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.738 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.738 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.738 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.738 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.738 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.738 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.738 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.738 00:08:33.738 14:09:34 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.738 14:09:34 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:08:33.739 ===================================================== 00:08:33.739 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:33.739 ===================================================== 00:08:33.739 Controller Capabilities/Features 00:08:33.739 ================================ 00:08:33.739 Vendor ID: 1b36 00:08:33.739 Subsystem Vendor ID: 1af4 00:08:33.739 Serial Number: 12343 00:08:33.739 Model Number: QEMU NVMe Ctrl 00:08:33.739 Firmware Version: 8.0.0 00:08:33.739 Recommended Arb Burst: 6 00:08:33.739 IEEE OUI Identifier: 00 54 52 00:08:33.739 Multi-path I/O 00:08:33.739 May have multiple subsystem ports: No 00:08:33.739 May have multiple controllers: Yes 00:08:33.739 Associated with SR-IOV VF: No 00:08:33.739 Max Data Transfer Size: 524288 00:08:33.739 Max Number of Namespaces: 256 00:08:33.739 Max Number of I/O Queues: 64 00:08:33.739 NVMe Specification Version (VS): 1.4 00:08:33.739 NVMe Specification Version (Identify): 1.4 00:08:33.739 Maximum Queue Entries: 2048 00:08:33.739 Contiguous Queues 
Required: Yes 00:08:33.739 Arbitration Mechanisms Supported 00:08:33.739 Weighted Round Robin: Not Supported 00:08:33.739 Vendor Specific: Not Supported 00:08:33.739 Reset Timeout: 7500 ms 00:08:33.739 Doorbell Stride: 4 bytes 00:08:33.739 NVM Subsystem Reset: Not Supported 00:08:33.739 Command Sets Supported 00:08:33.739 NVM Command Set: Supported 00:08:33.739 Boot Partition: Not Supported 00:08:33.739 Memory Page Size Minimum: 4096 bytes 00:08:33.739 Memory Page Size Maximum: 65536 bytes 00:08:33.739 Persistent Memory Region: Not Supported 00:08:33.739 Optional Asynchronous Events Supported 00:08:33.739 Namespace Attribute Notices: Supported 00:08:33.739 Firmware Activation Notices: Not Supported 00:08:33.739 ANA Change Notices: Not Supported 00:08:33.739 PLE Aggregate Log Change Notices: Not Supported 00:08:33.739 LBA Status Info Alert Notices: Not Supported 00:08:33.739 EGE Aggregate Log Change Notices: Not Supported 00:08:33.739 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.739 Zone Descriptor Change Notices: Not Supported 00:08:33.739 Discovery Log Change Notices: Not Supported 00:08:33.739 Controller Attributes 00:08:33.739 128-bit Host Identifier: Not Supported 00:08:33.739 Non-Operational Permissive Mode: Not Supported 00:08:33.739 NVM Sets: Not Supported 00:08:33.739 Read Recovery Levels: Not Supported 00:08:33.739 Endurance Groups: Supported 00:08:33.739 Predictable Latency Mode: Not Supported 00:08:33.739 Traffic Based Keep Alive: Not Supported 00:08:33.739 Namespace Granularity: Not Supported 00:08:33.739 SQ Associations: Not Supported 00:08:33.739 UUID List: Not Supported 00:08:33.739 Multi-Domain Subsystem: Not Supported 00:08:33.739 Fixed Capacity Management: Not Supported 00:08:33.739 Variable Capacity Management: Not Supported 00:08:33.739 Delete Endurance Group: Not Supported 00:08:33.739 Delete NVM Set: Not Supported 00:08:33.739 Extended LBA Formats Supported: Supported 00:08:33.739 Flexible Data Placement Supported: Supported 00:08:33.739 00:08:33.739 Controller Memory Buffer Support 00:08:33.739 ================================ 00:08:33.739 Supported: No 00:08:33.739 00:08:33.739 Persistent Memory Region Support 00:08:33.739 ================================ 00:08:33.739 Supported: No 00:08:33.739 00:08:33.739 Admin Command Set Attributes 00:08:33.739 ============================ 00:08:33.739 Security Send/Receive: Not Supported 00:08:33.739 Format NVM: Supported 00:08:33.739 Firmware Activate/Download: Not Supported 00:08:33.739 Namespace Management: Supported 00:08:33.739 Device Self-Test: Not Supported 00:08:33.739 Directives: Supported 00:08:33.739 NVMe-MI: Not Supported 00:08:33.739 Virtualization Management: Not Supported 00:08:33.739 Doorbell Buffer Config: Supported 00:08:33.739 Get LBA Status Capability: Not Supported 00:08:33.739 Command & Feature Lockdown Capability: Not Supported 00:08:33.739 Abort Command Limit: 4 00:08:33.739 Async Event Request Limit: 4 00:08:33.739 Number of Firmware Slots: N/A 00:08:33.739 Firmware Slot 1 Read-Only: N/A 00:08:33.739 Firmware Activation Without Reset: N/A 00:08:33.739 Multiple Update Detection Support: N/A 00:08:33.739 Firmware Update Granularity: No Information Provided 00:08:33.739 Per-Namespace SMART Log: Yes 00:08:33.739 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.739 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:33.739 Command Effects Log Page: Supported 00:08:33.739 Get Log Page Extended Data: Supported 00:08:33.739 Telemetry Log Pages: Not Supported 00:08:33.739 Persistent
Event Log Pages: Not Supported 00:08:33.739 Supported Log Pages Log Page: May Support 00:08:33.739 Commands Supported & Effects Log Page: Not Supported 00:08:33.739 Feature Identifiers & Effects Log Page: May Support 00:08:33.739 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.739 Data Area 4 for Telemetry Log: Not Supported 00:08:33.739 Error Log Page Entries Supported: 1 00:08:33.739 Keep Alive: Not Supported 00:08:33.739 00:08:33.739 NVM Command Set Attributes 00:08:33.739 ========================== 00:08:33.739 Submission Queue Entry Size 00:08:33.739 Max: 64 00:08:33.739 Min: 64 00:08:33.739 Completion Queue Entry Size 00:08:33.739 Max: 16 00:08:33.739 Min: 16 00:08:33.739 Number of Namespaces: 256 00:08:33.739 Compare Command: Supported 00:08:33.739 Write Uncorrectable Command: Not Supported 00:08:33.739 Dataset Management Command: Supported 00:08:33.739 Write Zeroes Command: Supported 00:08:33.739 Set Features Save Field: Supported 00:08:33.739 Reservations: Not Supported 00:08:33.739 Timestamp: Supported 00:08:33.739 Copy: Supported 00:08:33.739 Volatile Write Cache: Present 00:08:33.739 Atomic Write Unit (Normal): 1 00:08:33.739 Atomic Write Unit (PFail): 1 00:08:33.739 Atomic Compare & Write Unit: 1 00:08:33.739 Fused Compare & Write: Not Supported 00:08:33.739 Scatter-Gather List 00:08:33.739 SGL Command Set: Supported 00:08:33.739 SGL Keyed: Not Supported 00:08:33.739 SGL Bit Bucket Descriptor: Not Supported 00:08:33.739 SGL Metadata Pointer: Not Supported 00:08:33.739 Oversized SGL: Not Supported 00:08:33.739 SGL Metadata Address: Not Supported 00:08:33.739 SGL Offset: Not Supported 00:08:33.739 Transport SGL Data Block: Not Supported 00:08:33.739 Replay Protected Memory Block: Not Supported 00:08:33.739 00:08:33.739 Firmware Slot Information 00:08:33.739 ========================= 00:08:33.739 Active slot: 1 00:08:33.739 Slot 1 Firmware Revision: 1.0 00:08:33.739 00:08:33.739 00:08:33.739 Commands Supported and Effects 00:08:33.739 ============================== 00:08:33.739 Admin Commands 00:08:33.739 -------------- 00:08:33.739 Delete I/O Submission Queue (00h): Supported 00:08:33.739 Create I/O Submission Queue (01h): Supported 00:08:33.739 Get Log Page (02h): Supported 00:08:33.739 Delete I/O Completion Queue (04h): Supported 00:08:33.739 Create I/O Completion Queue (05h): Supported 00:08:33.739 Identify (06h): Supported 00:08:33.739 Abort (08h): Supported 00:08:33.739 Set Features (09h): Supported 00:08:33.739 Get Features (0Ah): Supported 00:08:33.739 Asynchronous Event Request (0Ch): Supported 00:08:33.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.739 Directive Send (19h): Supported 00:08:33.739 Directive Receive (1Ah): Supported 00:08:33.739 Virtualization Management (1Ch): Supported 00:08:33.739 Doorbell Buffer Config (7Ch): Supported 00:08:33.739 Format NVM (80h): Supported LBA-Change 00:08:33.739 I/O Commands 00:08:33.739 ------------ 00:08:33.739 Flush (00h): Supported LBA-Change 00:08:33.739 Write (01h): Supported LBA-Change 00:08:33.739 Read (02h): Supported 00:08:33.739 Compare (05h): Supported 00:08:33.739 Write Zeroes (08h): Supported LBA-Change 00:08:33.740 Dataset Management (09h): Supported LBA-Change 00:08:33.740 Unknown (0Ch): Supported 00:08:33.740 Unknown (12h): Supported 00:08:33.740 Copy (19h): Supported LBA-Change 00:08:33.740 Unknown (1Dh): Supported LBA-Change 00:08:33.740 00:08:33.740 Error Log 00:08:33.740 ========= 00:08:33.740 00:08:33.740 Arbitration 00:08:33.740 =========== 00:08:33.740 Arbitration Burst: no
limit 00:08:33.740 00:08:33.740 Power Management 00:08:33.740 ================ 00:08:33.740 Number of Power States: 1 00:08:33.740 Current Power State: Power State #0 00:08:33.740 Power State #0: 00:08:33.740 Max Power: 25.00 W 00:08:33.740 Non-Operational State: Operational 00:08:33.740 Entry Latency: 16 microseconds 00:08:33.740 Exit Latency: 4 microseconds 00:08:33.740 Relative Read Throughput: 0 00:08:33.740 Relative Read Latency: 0 00:08:33.740 Relative Write Throughput: 0 00:08:33.740 Relative Write Latency: 0 00:08:33.740 Idle Power: Not Reported 00:08:33.740 Active Power: Not Reported 00:08:33.740 Non-Operational Permissive Mode: Not Supported 00:08:33.740 00:08:33.740 Health Information 00:08:33.740 ================== 00:08:33.740 Critical Warnings: 00:08:33.740 Available Spare Space: OK 00:08:33.740 Temperature: OK 00:08:33.740 Device Reliability: OK 00:08:33.740 Read Only: No 00:08:33.740 Volatile Memory Backup: OK 00:08:33.740 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.740 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.740 Available Spare: 0% 00:08:33.740 Available Spare Threshold: 0% 00:08:33.740 Life Percentage Used: 0% 00:08:33.740 Data Units Read: 1440 00:08:33.740 Data Units Written: 667 00:08:33.740 Host Read Commands: 59128 00:08:33.740 Host Write Commands: 28978 00:08:33.740 Controller Busy Time: 0 minutes 00:08:33.740 Power Cycles: 0 00:08:33.740 Power On Hours: 0 hours 00:08:33.740 Unsafe Shutdowns: 0 00:08:33.740 Unrecoverable Media Errors: 0 00:08:33.740 Lifetime Error Log Entries: 0 00:08:33.740 Warning Temperature Time: 0 minutes 00:08:33.740 Critical Temperature Time: 0 minutes 00:08:33.740 00:08:33.740 Number of Queues 00:08:33.740 ================ 00:08:33.740 Number of I/O Submission Queues: 64 00:08:33.740 Number of I/O Completion Queues: 64 00:08:33.740 00:08:33.740 ZNS Specific Controller Data 00:08:33.740 ============================ 00:08:33.740 Zone Append Size Limit: 0 00:08:33.740 00:08:33.740 00:08:33.740 Active Namespaces 00:08:33.740 ================= 00:08:33.740 Namespace ID:1 00:08:33.740 Error Recovery Timeout: Unlimited 00:08:33.740 Command Set Identifier: NVM (00h) 00:08:33.740 Deallocate: Supported 00:08:33.740 Deallocated/Unwritten Error: Supported 00:08:33.740 Deallocated Read Value: All 0x00 00:08:33.740 Deallocate in Write Zeroes: Not Supported 00:08:33.740 Deallocated Guard Field: 0xFFFF 00:08:33.740 Flush: Supported 00:08:33.740 Reservation: Not Supported 00:08:33.740 Namespace Sharing Capabilities: Multiple Controllers 00:08:33.740 Size (in LBAs): 262144 (1GiB) 00:08:33.740 Capacity (in LBAs): 262144 (1GiB) 00:08:33.740 Utilization (in LBAs): 262144 (1GiB) 00:08:33.740 Thin Provisioning: Not Supported 00:08:33.740 Per-NS Atomic Units: No 00:08:33.740 Maximum Single Source Range Length: 128 00:08:33.740 Maximum Copy Length: 128 00:08:33.740 Maximum Source Range Count: 128 00:08:33.740 NGUID/EUI64 Never Reused: No 00:08:33.740 Namespace Write Protected: No 00:08:33.740 Endurance group ID: 1 00:08:33.740 Number of LBA Formats: 8 00:08:33.740 Current LBA Format: LBA Format #04 00:08:33.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.740 LBA 
Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.740 00:08:33.740 Get Feature FDP: 00:08:33.740 ================ 00:08:33.740 Enabled: Yes 00:08:33.740 FDP configuration index: 0 00:08:33.740 00:08:33.740 FDP configurations log page 00:08:33.740 =========================== 00:08:33.740 Number of FDP configurations: 1 00:08:33.740 Version: 0 00:08:33.740 Size: 112 00:08:33.740 FDP Configuration Descriptor: 0 00:08:33.740 Descriptor Size: 96 00:08:33.740 Reclaim Group Identifier format: 2 00:08:33.740 FDP Volatile Write Cache: Not Present 00:08:33.740 FDP Configuration: Valid 00:08:33.740 Vendor Specific Size: 0 00:08:33.740 Number of Reclaim Groups: 2 00:08:33.740 Number of Reclaim Unit Handles: 8 00:08:33.740 Max Placement Identifiers: 128 00:08:33.740 Number of Namespaces Supported: 256 00:08:33.740 Reclaim unit Nominal Size: 6000000 bytes 00:08:33.740 Estimated Reclaim Unit Time Limit: Not Reported 00:08:33.740 RUH Desc #000: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #001: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #002: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #003: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #004: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #005: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #006: RUH Type: Initially Isolated 00:08:33.740 RUH Desc #007: RUH Type: Initially Isolated 00:08:33.740 00:08:33.740 FDP reclaim unit handle usage log page 00:08:33.740 ====================================== 00:08:33.740 Number of Reclaim Unit Handles: 8 00:08:33.740 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:33.740 RUH Usage Desc #001: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #002: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #003: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #004: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #005: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #006: RUH Attributes: Unused 00:08:33.740 RUH Usage Desc #007: RUH Attributes: Unused 00:08:33.740 00:08:33.740 FDP statistics log page 00:08:33.740 ======================= 00:08:33.740 Host bytes with metadata written: 445685760 00:08:33.740 Media bytes with metadata written: 445755392 00:08:33.740 Media bytes erased: 0 00:08:33.740 00:08:33.740 FDP events log page 00:08:33.740 =================== 00:08:33.740 Number of FDP events: 0 00:08:33.740 00:08:33.740 00:08:33.740 real 0m1.105s 00:08:33.740 user 0m0.380s 00:08:33.740 sys 0m0.498s 00:08:33.740 14:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.740 14:09:35 -- common/autotest_common.sh@10 -- # set +x 00:08:33.740 ************************************ 00:08:33.740 END TEST nvme_identify 00:08:33.740 ************************************ 00:08:33.998 14:09:35 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:33.998 14:09:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.998 14:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.998 14:09:35 -- common/autotest_common.sh@10 -- # set +x 00:08:33.998 ************************************ 00:08:33.998 START TEST nvme_perf 00:08:33.998 ************************************ 00:08:33.998 14:09:35 -- common/autotest_common.sh@1114 -- # nvme_perf 00:08:33.998 14:09:35 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:35.374 Initializing NVMe Controllers 00:08:35.374 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:08:35.374 Attached to NVMe Controller at
0000:00:06.0 [1b36:0010] 00:08:35.374 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:08:35.374 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:08:35.374 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:08:35.374 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:08:35.374 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:08:35.374 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:08:35.374 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:08:35.374 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:08:35.374 Initialization complete. Launching workers. 00:08:35.374 ======================================================== 00:08:35.374 Latency(us) 00:08:35.374 Device Information : IOPS MiB/s Average min max 00:08:35.375 PCIE (0000:00:09.0) NSID 1 from core 0: 18175.75 213.00 7039.89 5183.26 29629.37 00:08:35.375 PCIE (0000:00:06.0) NSID 1 from core 0: 18175.75 213.00 7033.91 5023.50 28971.62 00:08:35.375 PCIE (0000:00:07.0) NSID 1 from core 0: 18175.75 213.00 7028.98 4929.88 27524.25 00:08:35.375 PCIE (0000:00:08.0) NSID 1 from core 0: 18175.75 213.00 7023.24 5163.84 26965.82 00:08:35.375 PCIE (0000:00:08.0) NSID 2 from core 0: 18175.75 213.00 7017.62 5155.59 25717.88 00:08:35.375 PCIE (0000:00:08.0) NSID 3 from core 0: 18303.74 214.50 6963.03 5204.43 17612.87 00:08:35.375 ======================================================== 00:08:35.375 Total : 109182.47 1279.48 7017.72 4929.88 29629.37 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5318.498us 00:08:35.375 10.00000% : 5671.385us 00:08:35.375 25.00000% : 6049.477us 00:08:35.375 50.00000% : 6604.012us 00:08:35.375 75.00000% : 7208.960us 00:08:35.375 90.00000% : 8368.443us 00:08:35.375 95.00000% : 10435.348us 00:08:35.375 98.00000% : 12502.252us 00:08:35.375 99.00000% : 14821.218us 00:08:35.375 99.50000% : 27625.945us 00:08:35.375 99.90000% : 29239.138us 00:08:35.375 99.99000% : 29642.437us 00:08:35.375 99.99900% : 29642.437us 00:08:35.375 99.99990% : 29642.437us 00:08:35.375 99.99999% : 29642.437us 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5167.262us 00:08:35.375 10.00000% : 5570.560us 00:08:35.375 25.00000% : 5999.065us 00:08:35.375 50.00000% : 6654.425us 00:08:35.375 75.00000% : 7259.372us 00:08:35.375 90.00000% : 8570.092us 00:08:35.375 95.00000% : 10485.760us 00:08:35.375 98.00000% : 12048.542us 00:08:35.375 99.00000% : 14317.095us 00:08:35.375 99.50000% : 26819.348us 00:08:35.375 99.90000% : 28634.191us 00:08:35.375 99.99000% : 29037.489us 00:08:35.375 99.99900% : 29037.489us 00:08:35.375 99.99990% : 29037.489us 00:08:35.375 99.99999% : 29037.489us 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5293.292us 00:08:35.375 10.00000% : 5646.178us 00:08:35.375 25.00000% : 6049.477us 00:08:35.375 50.00000% : 6604.012us 00:08:35.375 75.00000% : 7208.960us 00:08:35.375 90.00000% : 8620.505us 00:08:35.375 95.00000% : 10284.111us 00:08:35.375 98.00000% : 11544.418us 00:08:35.375 99.00000% : 14821.218us 00:08:35.375 99.50000% : 25407.803us 00:08:35.375 99.90000% : 27222.646us 00:08:35.375 99.99000% : 27625.945us 
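The device-information and latency tables above, and the per-bucket histograms that follow, come from the spdk_nvme_perf invocation logged at nvme/nvme.sh@22. A minimal standalone re-run might look like the sketch below; the -q/-w/-o/-t values are copied from the logged command line, while the notes on -LL, -i and -N are assumptions that should be checked against this SPDK build's spdk_nvme_perf --help.

#!/usr/bin/env bash
# Sketch: repeat the one-second read-latency measurement from this log.
# Assumes an SPDK build under $SPDK_DIR and NVMe devices already bound to
# a userspace driver (e.g. via scripts/setup.sh); run as root.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# -q 128    queue depth per namespace
# -w read   100% sequential-read workload
# -o 12288  I/O size in bytes (three 4096-byte blocks per command)
# -t 1      run time in seconds
# -LL       latency tracking; the repeated flag also emits the per-bucket
#           histograms seen below (assumption: verify with --help)
# -i 0      shared-memory group ID for multi-process mode (assumption)
# -N        skip shutdown notification on exit (assumption)
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N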
00:08:35.375 99.99900% : 27625.945us 00:08:35.375 99.99990% : 27625.945us 00:08:35.375 99.99999% : 27625.945us 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5343.705us 00:08:35.375 10.00000% : 5671.385us 00:08:35.375 25.00000% : 6049.477us 00:08:35.375 50.00000% : 6604.012us 00:08:35.375 75.00000% : 7208.960us 00:08:35.375 90.00000% : 8469.268us 00:08:35.375 95.00000% : 10536.172us 00:08:35.375 98.00000% : 12451.840us 00:08:35.375 99.00000% : 14317.095us 00:08:35.375 99.50000% : 24903.680us 00:08:35.375 99.90000% : 26617.698us 00:08:35.375 99.99000% : 27020.997us 00:08:35.375 99.99900% : 27020.997us 00:08:35.375 99.99990% : 27020.997us 00:08:35.375 99.99999% : 27020.997us 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5318.498us 00:08:35.375 10.00000% : 5671.385us 00:08:35.375 25.00000% : 6049.477us 00:08:35.375 50.00000% : 6604.012us 00:08:35.375 75.00000% : 7208.960us 00:08:35.375 90.00000% : 8418.855us 00:08:35.375 95.00000% : 10536.172us 00:08:35.375 98.00000% : 13107.200us 00:08:35.375 99.00000% : 14317.095us 00:08:35.375 99.50000% : 23592.960us 00:08:35.375 99.90000% : 25306.978us 00:08:35.375 99.99000% : 25710.277us 00:08:35.375 99.99900% : 25811.102us 00:08:35.375 99.99990% : 25811.102us 00:08:35.375 99.99999% : 25811.102us 00:08:35.375 00:08:35.375 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:35.375 ================================================================================= 00:08:35.375 1.00000% : 5318.498us 00:08:35.375 10.00000% : 5671.385us 00:08:35.375 25.00000% : 6049.477us 00:08:35.375 50.00000% : 6604.012us 00:08:35.375 75.00000% : 7208.960us 00:08:35.375 90.00000% : 8418.855us 00:08:35.375 95.00000% : 10284.111us 00:08:35.375 98.00000% : 13006.375us 00:08:35.375 99.00000% : 14619.569us 00:08:35.375 99.50000% : 15526.991us 00:08:35.375 99.90000% : 17241.009us 00:08:35.375 99.99000% : 17644.308us 00:08:35.375 99.99900% : 17644.308us 00:08:35.375 99.99990% : 17644.308us 00:08:35.375 99.99999% : 17644.308us 00:08:35.375 00:08:35.375 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:08:35.375 ============================================================================== 00:08:35.375 Range in us Cumulative IO count 00:08:35.375 5167.262 - 5192.468: 0.0110% ( 2) 00:08:35.375 5192.468 - 5217.674: 0.0770% ( 12) 00:08:35.375 5217.674 - 5242.880: 0.1926% ( 21) 00:08:35.375 5242.880 - 5268.086: 0.3081% ( 21) 00:08:35.375 5268.086 - 5293.292: 0.5392% ( 42) 00:08:35.375 5293.292 - 5318.498: 1.0398% ( 91) 00:08:35.375 5318.498 - 5343.705: 1.5570% ( 94) 00:08:35.375 5343.705 - 5368.911: 2.0191% ( 84) 00:08:35.375 5368.911 - 5394.117: 2.5583% ( 98) 00:08:35.375 5394.117 - 5419.323: 3.1745% ( 112) 00:08:35.375 5419.323 - 5444.529: 3.6587% ( 88) 00:08:35.375 5444.529 - 5469.735: 4.2033% ( 99) 00:08:35.375 5469.735 - 5494.942: 4.8305% ( 114) 00:08:35.375 5494.942 - 5520.148: 5.5183% ( 125) 00:08:35.375 5520.148 - 5545.354: 6.3050% ( 143) 00:08:35.375 5545.354 - 5570.560: 7.0973% ( 144) 00:08:35.375 5570.560 - 5595.766: 7.8950% ( 145) 00:08:35.375 5595.766 - 5620.972: 8.6928% ( 145) 00:08:35.375 5620.972 - 5646.178: 9.5621% ( 158) 00:08:35.375 5646.178 - 5671.385: 10.4588% ( 163) 00:08:35.375 5671.385 - 5696.591: 11.3336% ( 
159) 00:08:35.375 5696.591 - 5721.797: 12.3404% ( 183) 00:08:35.375 5721.797 - 5747.003: 13.3308% ( 180) 00:08:35.375 5747.003 - 5772.209: 14.2661% ( 170) 00:08:35.375 5772.209 - 5797.415: 15.1904% ( 168) 00:08:35.375 5797.415 - 5822.622: 16.1477% ( 174) 00:08:35.375 5822.622 - 5847.828: 17.1435% ( 181) 00:08:35.375 5847.828 - 5873.034: 18.1393% ( 181) 00:08:35.375 5873.034 - 5898.240: 19.2397% ( 200) 00:08:35.375 5898.240 - 5923.446: 20.3345% ( 199) 00:08:35.375 5923.446 - 5948.652: 21.4349% ( 200) 00:08:35.375 5948.652 - 5973.858: 22.5297% ( 199) 00:08:35.375 5973.858 - 5999.065: 23.6301% ( 200) 00:08:35.375 5999.065 - 6024.271: 24.7139% ( 197) 00:08:35.375 6024.271 - 6049.477: 25.7923% ( 196) 00:08:35.375 6049.477 - 6074.683: 26.9146% ( 204) 00:08:35.375 6074.683 - 6099.889: 28.0150% ( 200) 00:08:35.375 6099.889 - 6125.095: 29.1593% ( 208) 00:08:35.375 6125.095 - 6150.302: 30.2927% ( 206) 00:08:35.375 6150.302 - 6175.508: 31.4261% ( 206) 00:08:35.375 6175.508 - 6200.714: 32.5374% ( 202) 00:08:35.375 6200.714 - 6225.920: 33.6543% ( 203) 00:08:35.375 6225.920 - 6251.126: 34.7931% ( 207) 00:08:35.375 6251.126 - 6276.332: 35.8770% ( 197) 00:08:35.375 6276.332 - 6301.538: 37.0158% ( 207) 00:08:35.375 6301.538 - 6326.745: 38.1107% ( 199) 00:08:35.375 6326.745 - 6351.951: 39.2606% ( 209) 00:08:35.375 6351.951 - 6377.157: 40.3774% ( 203) 00:08:35.375 6377.157 - 6402.363: 41.5328% ( 210) 00:08:35.375 6402.363 - 6427.569: 42.6662% ( 206) 00:08:35.375 6427.569 - 6452.775: 43.8435% ( 214) 00:08:35.375 6452.775 - 6503.188: 46.2093% ( 430) 00:08:35.375 6503.188 - 6553.600: 48.5035% ( 417) 00:08:35.375 6553.600 - 6604.012: 50.8088% ( 419) 00:08:35.375 6604.012 - 6654.425: 53.1580% ( 427) 00:08:35.375 6654.425 - 6704.837: 55.5678% ( 438) 00:08:35.375 6704.837 - 6755.249: 57.9390% ( 431) 00:08:35.375 6755.249 - 6805.662: 60.3103% ( 431) 00:08:35.375 6805.662 - 6856.074: 62.7091% ( 436) 00:08:35.375 6856.074 - 6906.486: 65.0253% ( 421) 00:08:35.375 6906.486 - 6956.898: 67.2040% ( 396) 00:08:35.375 6956.898 - 7007.311: 69.1901% ( 361) 00:08:35.375 7007.311 - 7057.723: 70.9562% ( 321) 00:08:35.375 7057.723 - 7108.135: 72.6067% ( 300) 00:08:35.375 7108.135 - 7158.548: 74.1472% ( 280) 00:08:35.375 7158.548 - 7208.960: 75.5062% ( 247) 00:08:35.375 7208.960 - 7259.372: 76.7606% ( 228) 00:08:35.375 7259.372 - 7309.785: 77.9269% ( 212) 00:08:35.375 7309.785 - 7360.197: 78.9888% ( 193) 00:08:35.375 7360.197 - 7410.609: 79.9736% ( 179) 00:08:35.375 7410.609 - 7461.022: 80.9089% ( 170) 00:08:35.375 7461.022 - 7511.434: 81.8057% ( 163) 00:08:35.375 7511.434 - 7561.846: 82.5319% ( 132) 00:08:35.375 7561.846 - 7612.258: 83.1591% ( 114) 00:08:35.375 7612.258 - 7662.671: 83.7423% ( 106) 00:08:35.375 7662.671 - 7713.083: 84.2980% ( 101) 00:08:35.375 7713.083 - 7763.495: 84.7986% ( 91) 00:08:35.375 7763.495 - 7813.908: 85.2828% ( 88) 00:08:35.375 7813.908 - 7864.320: 85.7449% ( 84) 00:08:35.375 7864.320 - 7914.732: 86.1961% ( 82) 00:08:35.375 7914.732 - 7965.145: 86.6802% ( 88) 00:08:35.375 7965.145 - 8015.557: 87.1699% ( 89) 00:08:35.375 8015.557 - 8065.969: 87.6375% ( 85) 00:08:35.375 8065.969 - 8116.382: 88.0557% ( 76) 00:08:35.375 8116.382 - 8166.794: 88.4848% ( 78) 00:08:35.375 8166.794 - 8217.206: 88.9250% ( 80) 00:08:35.376 8217.206 - 8267.618: 89.3376% ( 75) 00:08:35.376 8267.618 - 8318.031: 89.7612% ( 77) 00:08:35.376 8318.031 - 8368.443: 90.1463% ( 70) 00:08:35.376 8368.443 - 8418.855: 90.5370% ( 71) 00:08:35.376 8418.855 - 8469.268: 90.9166% ( 69) 00:08:35.376 8469.268 - 8519.680: 91.2632% ( 63) 00:08:35.376 
8519.680 - 8570.092: 91.5603% ( 54) 00:08:35.376 8570.092 - 8620.505: 91.7804% ( 40) 00:08:35.376 8620.505 - 8670.917: 91.9839% ( 37) 00:08:35.376 8670.917 - 8721.329: 92.1765% ( 35) 00:08:35.376 8721.329 - 8771.742: 92.3415% ( 30) 00:08:35.376 8771.742 - 8822.154: 92.4516% ( 20) 00:08:35.376 8822.154 - 8872.566: 92.5396% ( 16) 00:08:35.376 8872.566 - 8922.978: 92.6386% ( 18) 00:08:35.376 8922.978 - 8973.391: 92.7047% ( 12) 00:08:35.376 8973.391 - 9023.803: 92.7652% ( 11) 00:08:35.376 9023.803 - 9074.215: 92.8257% ( 11) 00:08:35.376 9074.215 - 9124.628: 92.9027% ( 14) 00:08:35.376 9124.628 - 9175.040: 92.9798% ( 14) 00:08:35.376 9175.040 - 9225.452: 93.0513% ( 13) 00:08:35.376 9225.452 - 9275.865: 93.1393% ( 16) 00:08:35.376 9275.865 - 9326.277: 93.2273% ( 16) 00:08:35.376 9326.277 - 9376.689: 93.3099% ( 15) 00:08:35.376 9376.689 - 9427.102: 93.3869% ( 14) 00:08:35.376 9427.102 - 9477.514: 93.4639% ( 14) 00:08:35.376 9477.514 - 9527.926: 93.5244% ( 11) 00:08:35.376 9527.926 - 9578.338: 93.6015% ( 14) 00:08:35.376 9578.338 - 9628.751: 93.6785% ( 14) 00:08:35.376 9628.751 - 9679.163: 93.7665% ( 16) 00:08:35.376 9679.163 - 9729.575: 93.8600% ( 17) 00:08:35.376 9729.575 - 9779.988: 93.9261% ( 12) 00:08:35.376 9779.988 - 9830.400: 93.9976% ( 13) 00:08:35.376 9830.400 - 9880.812: 94.0746% ( 14) 00:08:35.376 9880.812 - 9931.225: 94.1516% ( 14) 00:08:35.376 9931.225 - 9981.637: 94.2232% ( 13) 00:08:35.376 9981.637 - 10032.049: 94.3057% ( 15) 00:08:35.376 10032.049 - 10082.462: 94.3937% ( 16) 00:08:35.376 10082.462 - 10132.874: 94.4817% ( 16) 00:08:35.376 10132.874 - 10183.286: 94.5698% ( 16) 00:08:35.376 10183.286 - 10233.698: 94.6578% ( 16) 00:08:35.376 10233.698 - 10284.111: 94.7403% ( 15) 00:08:35.376 10284.111 - 10334.523: 94.8338% ( 17) 00:08:35.376 10334.523 - 10384.935: 94.9384% ( 19) 00:08:35.376 10384.935 - 10435.348: 95.0319% ( 17) 00:08:35.376 10435.348 - 10485.760: 95.1089% ( 14) 00:08:35.376 10485.760 - 10536.172: 95.1860% ( 14) 00:08:35.376 10536.172 - 10586.585: 95.2630% ( 14) 00:08:35.376 10586.585 - 10636.997: 95.3345% ( 13) 00:08:35.376 10636.997 - 10687.409: 95.4115% ( 14) 00:08:35.376 10687.409 - 10737.822: 95.4831% ( 13) 00:08:35.376 10737.822 - 10788.234: 95.5546% ( 13) 00:08:35.376 10788.234 - 10838.646: 95.6316% ( 14) 00:08:35.376 10838.646 - 10889.058: 95.7031% ( 13) 00:08:35.376 10889.058 - 10939.471: 95.7691% ( 12) 00:08:35.376 10939.471 - 10989.883: 95.8462% ( 14) 00:08:35.376 10989.883 - 11040.295: 95.9232% ( 14) 00:08:35.376 11040.295 - 11090.708: 96.0167% ( 17) 00:08:35.376 11090.708 - 11141.120: 96.1103% ( 17) 00:08:35.376 11141.120 - 11191.532: 96.1983% ( 16) 00:08:35.376 11191.532 - 11241.945: 96.3193% ( 22) 00:08:35.376 11241.945 - 11292.357: 96.4349% ( 21) 00:08:35.376 11292.357 - 11342.769: 96.5284% ( 17) 00:08:35.376 11342.769 - 11393.182: 96.6219% ( 17) 00:08:35.376 11393.182 - 11443.594: 96.7210% ( 18) 00:08:35.376 11443.594 - 11494.006: 96.8035% ( 15) 00:08:35.376 11494.006 - 11544.418: 96.8860% ( 15) 00:08:35.376 11544.418 - 11594.831: 96.9630% ( 14) 00:08:35.376 11594.831 - 11645.243: 97.0401% ( 14) 00:08:35.376 11645.243 - 11695.655: 97.1116% ( 13) 00:08:35.376 11695.655 - 11746.068: 97.1666% ( 10) 00:08:35.376 11746.068 - 11796.480: 97.2326% ( 12) 00:08:35.376 11796.480 - 11846.892: 97.2876% ( 10) 00:08:35.376 11846.892 - 11897.305: 97.3426% ( 10) 00:08:35.376 11897.305 - 11947.717: 97.4032% ( 11) 00:08:35.376 11947.717 - 11998.129: 97.4582% ( 10) 00:08:35.376 11998.129 - 12048.542: 97.5187% ( 11) 00:08:35.376 12048.542 - 12098.954: 97.5737% ( 10) 
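Everything printed earlier in this test, up to the END TEST nvme_identify marker, was generated by the loop at nvme/nvme.sh@15-16, which runs spdk_nvme_identify once per PCI address. A standalone sketch of that loop, assuming the same four QEMU controllers this VM attaches to:

#!/usr/bin/env bash
# Sketch: dump identify data per controller, mirroring nvme/nvme.sh@15-16.
# The bdfs list matches the devices attached in this log; adjust elsewhere.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
bdfs=(0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0)

for bdf in "${bdfs[@]}"; do
    # -r selects the transport and address; -i 0 matches the shared-memory
    # group used by the logged command (assumption)
    "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done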
00:08:35.376 12098.954 - 12149.366: 97.6342% ( 11) 00:08:35.376 12149.366 - 12199.778: 97.6948% ( 11) 00:08:35.376 12199.778 - 12250.191: 97.7498% ( 10) 00:08:35.376 12250.191 - 12300.603: 97.7938% ( 8) 00:08:35.376 12300.603 - 12351.015: 97.8543% ( 11) 00:08:35.376 12351.015 - 12401.428: 97.9093% ( 10) 00:08:35.376 12401.428 - 12451.840: 97.9588% ( 9) 00:08:35.376 12451.840 - 12502.252: 98.0029% ( 8) 00:08:35.376 12502.252 - 12552.665: 98.0524% ( 9) 00:08:35.376 12552.665 - 12603.077: 98.1019% ( 9) 00:08:35.376 12603.077 - 12653.489: 98.1514% ( 9) 00:08:35.376 12653.489 - 12703.902: 98.1954% ( 8) 00:08:35.376 12703.902 - 12754.314: 98.2394% ( 8) 00:08:35.376 12754.314 - 12804.726: 98.2945% ( 10) 00:08:35.376 12804.726 - 12855.138: 98.3385% ( 8) 00:08:35.376 12855.138 - 12905.551: 98.3880% ( 9) 00:08:35.376 12905.551 - 13006.375: 98.4760% ( 16) 00:08:35.376 13006.375 - 13107.200: 98.5145% ( 7) 00:08:35.376 13107.200 - 13208.025: 98.5365% ( 4) 00:08:35.376 13208.025 - 13308.849: 98.5585% ( 4) 00:08:35.376 13308.849 - 13409.674: 98.5805% ( 4) 00:08:35.376 13409.674 - 13510.498: 98.5915% ( 2) 00:08:35.376 13812.972 - 13913.797: 98.6466% ( 10) 00:08:35.376 13913.797 - 14014.622: 98.6686% ( 4) 00:08:35.376 14014.622 - 14115.446: 98.7126% ( 8) 00:08:35.376 14115.446 - 14216.271: 98.7566% ( 8) 00:08:35.376 14216.271 - 14317.095: 98.7951% ( 7) 00:08:35.376 14317.095 - 14417.920: 98.8391% ( 8) 00:08:35.376 14417.920 - 14518.745: 98.8831% ( 8) 00:08:35.376 14518.745 - 14619.569: 98.9217% ( 7) 00:08:35.376 14619.569 - 14720.394: 98.9657% ( 8) 00:08:35.376 14720.394 - 14821.218: 99.0097% ( 8) 00:08:35.376 14821.218 - 14922.043: 99.0482% ( 7) 00:08:35.376 14922.043 - 15022.868: 99.0867% ( 7) 00:08:35.376 15022.868 - 15123.692: 99.1307% ( 8) 00:08:35.376 15123.692 - 15224.517: 99.1692% ( 7) 00:08:35.376 15224.517 - 15325.342: 99.2132% ( 8) 00:08:35.376 15325.342 - 15426.166: 99.2573% ( 8) 00:08:35.376 15426.166 - 15526.991: 99.2958% ( 7) 00:08:35.376 26617.698 - 26819.348: 99.3123% ( 3) 00:08:35.376 26819.348 - 27020.997: 99.3618% ( 9) 00:08:35.376 27020.997 - 27222.646: 99.4168% ( 10) 00:08:35.376 27222.646 - 27424.295: 99.4608% ( 8) 00:08:35.376 27424.295 - 27625.945: 99.5103% ( 9) 00:08:35.376 27625.945 - 27827.594: 99.5599% ( 9) 00:08:35.376 27827.594 - 28029.243: 99.6094% ( 9) 00:08:35.376 28029.243 - 28230.892: 99.6534% ( 8) 00:08:35.376 28230.892 - 28432.542: 99.7029% ( 9) 00:08:35.376 28432.542 - 28634.191: 99.7524% ( 9) 00:08:35.376 28634.191 - 28835.840: 99.8019% ( 9) 00:08:35.376 28835.840 - 29037.489: 99.8515% ( 9) 00:08:35.376 29037.489 - 29239.138: 99.9065% ( 10) 00:08:35.376 29239.138 - 29440.788: 99.9560% ( 9) 00:08:35.376 29440.788 - 29642.437: 100.0000% ( 8) 00:08:35.376 00:08:35.376 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:35.376 ============================================================================== 00:08:35.376 Range in us Cumulative IO count 00:08:35.376 5016.025 - 5041.231: 0.0385% ( 7) 00:08:35.376 5041.231 - 5066.437: 0.1320% ( 17) 00:08:35.376 5066.437 - 5091.643: 0.2806% ( 27) 00:08:35.376 5091.643 - 5116.849: 0.4897% ( 38) 00:08:35.376 5116.849 - 5142.055: 0.7537% ( 48) 00:08:35.376 5142.055 - 5167.262: 1.1059% ( 64) 00:08:35.376 5167.262 - 5192.468: 1.5460% ( 80) 00:08:35.376 5192.468 - 5217.674: 1.8926% ( 63) 00:08:35.376 5217.674 - 5242.880: 2.3162% ( 77) 00:08:35.376 5242.880 - 5268.086: 2.6794% ( 66) 00:08:35.376 5268.086 - 5293.292: 3.1580% ( 87) 00:08:35.376 5293.292 - 5318.498: 3.6917% ( 97) 00:08:35.376 5318.498 - 5343.705: 4.2309% ( 
98) 00:08:35.376 5343.705 - 5368.911: 4.8360% ( 110) 00:08:35.376 5368.911 - 5394.117: 5.5128% ( 123) 00:08:35.376 5394.117 - 5419.323: 6.1950% ( 124) 00:08:35.376 5419.323 - 5444.529: 6.9432% ( 136) 00:08:35.376 5444.529 - 5469.735: 7.5924% ( 118) 00:08:35.376 5469.735 - 5494.942: 8.3792% ( 143) 00:08:35.376 5494.942 - 5520.148: 9.1934% ( 148) 00:08:35.376 5520.148 - 5545.354: 9.8592% ( 121) 00:08:35.376 5545.354 - 5570.560: 10.6349% ( 141) 00:08:35.376 5570.560 - 5595.766: 11.3501% ( 130) 00:08:35.376 5595.766 - 5620.972: 12.1754% ( 150) 00:08:35.376 5620.972 - 5646.178: 12.9456% ( 140) 00:08:35.376 5646.178 - 5671.385: 13.7599% ( 148) 00:08:35.376 5671.385 - 5696.591: 14.6017% ( 153) 00:08:35.376 5696.591 - 5721.797: 15.4104% ( 147) 00:08:35.376 5721.797 - 5747.003: 16.3457% ( 170) 00:08:35.376 5747.003 - 5772.209: 17.3140% ( 176) 00:08:35.376 5772.209 - 5797.415: 18.1833% ( 158) 00:08:35.376 5797.415 - 5822.622: 19.1626% ( 178) 00:08:35.376 5822.622 - 5847.828: 20.0759% ( 166) 00:08:35.376 5847.828 - 5873.034: 21.0057% ( 169) 00:08:35.376 5873.034 - 5898.240: 21.9795% ( 177) 00:08:35.376 5898.240 - 5923.446: 22.9478% ( 176) 00:08:35.376 5923.446 - 5948.652: 23.8666% ( 167) 00:08:35.376 5948.652 - 5973.858: 24.8680% ( 182) 00:08:35.376 5973.858 - 5999.065: 25.8528% ( 179) 00:08:35.376 5999.065 - 6024.271: 26.7881% ( 170) 00:08:35.376 6024.271 - 6049.477: 27.7014% ( 166) 00:08:35.376 6049.477 - 6074.683: 28.6367% ( 170) 00:08:35.376 6074.683 - 6099.889: 29.6435% ( 183) 00:08:35.376 6099.889 - 6125.095: 30.5843% ( 171) 00:08:35.376 6125.095 - 6150.302: 31.5251% ( 171) 00:08:35.376 6150.302 - 6175.508: 32.5484% ( 186) 00:08:35.376 6175.508 - 6200.714: 33.4672% ( 167) 00:08:35.376 6200.714 - 6225.920: 34.4245% ( 174) 00:08:35.376 6225.920 - 6251.126: 35.3928% ( 176) 00:08:35.376 6251.126 - 6276.332: 36.3556% ( 175) 00:08:35.376 6276.332 - 6301.538: 37.3680% ( 184) 00:08:35.376 6301.538 - 6326.745: 38.4078% ( 189) 00:08:35.377 6326.745 - 6351.951: 39.3981% ( 180) 00:08:35.377 6351.951 - 6377.157: 40.4214% ( 186) 00:08:35.377 6377.157 - 6402.363: 41.3622% ( 171) 00:08:35.377 6402.363 - 6427.569: 42.3801% ( 185) 00:08:35.377 6427.569 - 6452.775: 43.4254% ( 190) 00:08:35.377 6452.775 - 6503.188: 45.4280% ( 364) 00:08:35.377 6503.188 - 6553.600: 47.4472% ( 367) 00:08:35.377 6553.600 - 6604.012: 49.5103% ( 375) 00:08:35.377 6604.012 - 6654.425: 51.5130% ( 364) 00:08:35.377 6654.425 - 6704.837: 53.6587% ( 390) 00:08:35.377 6704.837 - 6755.249: 55.7768% ( 385) 00:08:35.377 6755.249 - 6805.662: 57.8895% ( 384) 00:08:35.377 6805.662 - 6856.074: 60.0242% ( 388) 00:08:35.377 6856.074 - 6906.486: 62.2414% ( 403) 00:08:35.377 6906.486 - 6956.898: 64.4531% ( 402) 00:08:35.377 6956.898 - 7007.311: 66.6538% ( 400) 00:08:35.377 7007.311 - 7057.723: 68.8160% ( 393) 00:08:35.377 7057.723 - 7108.135: 70.8132% ( 363) 00:08:35.377 7108.135 - 7158.548: 72.4912% ( 305) 00:08:35.377 7158.548 - 7208.960: 74.0372% ( 281) 00:08:35.377 7208.960 - 7259.372: 75.4676% ( 260) 00:08:35.377 7259.372 - 7309.785: 76.6670% ( 218) 00:08:35.377 7309.785 - 7360.197: 77.7674% ( 200) 00:08:35.377 7360.197 - 7410.609: 78.6697% ( 164) 00:08:35.377 7410.609 - 7461.022: 79.5335% ( 157) 00:08:35.377 7461.022 - 7511.434: 80.3752% ( 153) 00:08:35.377 7511.434 - 7561.846: 81.1345% ( 138) 00:08:35.377 7561.846 - 7612.258: 81.9267% ( 144) 00:08:35.377 7612.258 - 7662.671: 82.6585% ( 133) 00:08:35.377 7662.671 - 7713.083: 83.2526% ( 108) 00:08:35.377 7713.083 - 7763.495: 83.7423% ( 89) 00:08:35.377 7763.495 - 7813.908: 84.2044% ( 84) 
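The Device Information table above is internally consistent: the MiB/s column is just IOPS times the 12288-byte I/O size scaled to MiB (18175.75 x 12288 / 1048576 is about 213.00 for the first row), and the six per-namespace IOPS values sum to the printed 109182.47 total up to rounding. A quick recomputation over a saved copy of this console output (perf.log is an assumed filename):

# Sketch: recompute MiB/s from the IOPS column of the perf summary rows.
# Counting fields from the end keeps this robust to the leading timestamp;
# the $NF test skips the "Summary latency data" header lines.
awk '/PCIE/ && /from core/ && $NF + 0 > 0 {
        iops = $(NF - 4)   # IOPS sits five fields from the end of each row
        printf "%.2f IOPS -> %.2f MiB/s\n", iops, iops * 12288 / 1048576
     }' perf.log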
00:08:35.377 7813.908 - 7864.320: 84.6996% ( 90) 00:08:35.377 7864.320 - 7914.732: 85.1177% ( 76) 00:08:35.377 7914.732 - 7965.145: 85.5359% ( 76) 00:08:35.377 7965.145 - 8015.557: 85.9705% ( 79) 00:08:35.377 8015.557 - 8065.969: 86.3611% ( 71) 00:08:35.377 8065.969 - 8116.382: 86.7738% ( 75) 00:08:35.377 8116.382 - 8166.794: 87.1754% ( 73) 00:08:35.377 8166.794 - 8217.206: 87.5550% ( 69) 00:08:35.377 8217.206 - 8267.618: 87.9842% ( 78) 00:08:35.377 8267.618 - 8318.031: 88.3308% ( 63) 00:08:35.377 8318.031 - 8368.443: 88.7269% ( 72) 00:08:35.377 8368.443 - 8418.855: 89.0735% ( 63) 00:08:35.377 8418.855 - 8469.268: 89.4201% ( 63) 00:08:35.377 8469.268 - 8519.680: 89.7447% ( 59) 00:08:35.377 8519.680 - 8570.092: 90.1684% ( 77) 00:08:35.377 8570.092 - 8620.505: 90.4544% ( 52) 00:08:35.377 8620.505 - 8670.917: 90.7956% ( 62) 00:08:35.377 8670.917 - 8721.329: 91.0596% ( 48) 00:08:35.377 8721.329 - 8771.742: 91.2962% ( 43) 00:08:35.377 8771.742 - 8822.154: 91.5163% ( 40) 00:08:35.377 8822.154 - 8872.566: 91.6868% ( 31) 00:08:35.377 8872.566 - 8922.978: 91.8519% ( 30) 00:08:35.377 8922.978 - 8973.391: 92.0059% ( 28) 00:08:35.377 8973.391 - 9023.803: 92.1215% ( 21) 00:08:35.377 9023.803 - 9074.215: 92.2645% ( 26) 00:08:35.377 9074.215 - 9124.628: 92.4076% ( 26) 00:08:35.377 9124.628 - 9175.040: 92.5121% ( 19) 00:08:35.377 9175.040 - 9225.452: 92.6276% ( 21) 00:08:35.377 9225.452 - 9275.865: 92.7212% ( 17) 00:08:35.377 9275.865 - 9326.277: 92.8422% ( 22) 00:08:35.377 9326.277 - 9376.689: 92.9412% ( 18) 00:08:35.377 9376.689 - 9427.102: 93.0513% ( 20) 00:08:35.377 9427.102 - 9477.514: 93.1613% ( 20) 00:08:35.377 9477.514 - 9527.926: 93.2438% ( 15) 00:08:35.377 9527.926 - 9578.338: 93.3539% ( 20) 00:08:35.377 9578.338 - 9628.751: 93.4474% ( 17) 00:08:35.377 9628.751 - 9679.163: 93.5189% ( 13) 00:08:35.377 9679.163 - 9729.575: 93.6070% ( 16) 00:08:35.377 9729.575 - 9779.988: 93.7060% ( 18) 00:08:35.377 9779.988 - 9830.400: 93.7885% ( 15) 00:08:35.377 9830.400 - 9880.812: 93.8490% ( 11) 00:08:35.377 9880.812 - 9931.225: 93.9371% ( 16) 00:08:35.377 9931.225 - 9981.637: 94.0141% ( 14) 00:08:35.377 9981.637 - 10032.049: 94.1076% ( 17) 00:08:35.377 10032.049 - 10082.462: 94.1791% ( 13) 00:08:35.377 10082.462 - 10132.874: 94.2727% ( 17) 00:08:35.377 10132.874 - 10183.286: 94.3607% ( 16) 00:08:35.377 10183.286 - 10233.698: 94.4927% ( 24) 00:08:35.377 10233.698 - 10284.111: 94.5863% ( 17) 00:08:35.377 10284.111 - 10334.523: 94.6908% ( 19) 00:08:35.377 10334.523 - 10384.935: 94.8008% ( 20) 00:08:35.377 10384.935 - 10435.348: 94.9054% ( 19) 00:08:35.377 10435.348 - 10485.760: 95.0209% ( 21) 00:08:35.377 10485.760 - 10536.172: 95.1419% ( 22) 00:08:35.377 10536.172 - 10586.585: 95.2410% ( 18) 00:08:35.377 10586.585 - 10636.997: 95.3455% ( 19) 00:08:35.377 10636.997 - 10687.409: 95.4555% ( 20) 00:08:35.377 10687.409 - 10737.822: 95.6041% ( 27) 00:08:35.377 10737.822 - 10788.234: 95.7031% ( 18) 00:08:35.377 10788.234 - 10838.646: 95.8077% ( 19) 00:08:35.377 10838.646 - 10889.058: 95.9067% ( 18) 00:08:35.377 10889.058 - 10939.471: 96.0277% ( 22) 00:08:35.377 10939.471 - 10989.883: 96.1378% ( 20) 00:08:35.377 10989.883 - 11040.295: 96.2313% ( 17) 00:08:35.377 11040.295 - 11090.708: 96.3193% ( 16) 00:08:35.377 11090.708 - 11141.120: 96.4018% ( 15) 00:08:35.377 11141.120 - 11191.532: 96.5119% ( 20) 00:08:35.377 11191.532 - 11241.945: 96.5999% ( 16) 00:08:35.377 11241.945 - 11292.357: 96.7210% ( 22) 00:08:35.377 11292.357 - 11342.769: 96.8200% ( 18) 00:08:35.377 11342.769 - 11393.182: 96.9355% ( 21) 00:08:35.377 
11393.182 - 11443.594: 97.0346% ( 18) 00:08:35.377 11443.594 - 11494.006: 97.1171% ( 15) 00:08:35.377 11494.006 - 11544.418: 97.2161% ( 18) 00:08:35.377 11544.418 - 11594.831: 97.3096% ( 17) 00:08:35.377 11594.831 - 11645.243: 97.3922% ( 15) 00:08:35.377 11645.243 - 11695.655: 97.4692% ( 14) 00:08:35.377 11695.655 - 11746.068: 97.5517% ( 15) 00:08:35.377 11746.068 - 11796.480: 97.6342% ( 15) 00:08:35.377 11796.480 - 11846.892: 97.7388% ( 19) 00:08:35.377 11846.892 - 11897.305: 97.8268% ( 16) 00:08:35.377 11897.305 - 11947.717: 97.9148% ( 16) 00:08:35.377 11947.717 - 11998.129: 97.9864% ( 13) 00:08:35.377 11998.129 - 12048.542: 98.0634% ( 14) 00:08:35.377 12048.542 - 12098.954: 98.1404% ( 14) 00:08:35.377 12098.954 - 12149.366: 98.2174% ( 14) 00:08:35.377 12149.366 - 12199.778: 98.2724% ( 10) 00:08:35.377 12199.778 - 12250.191: 98.3275% ( 10) 00:08:35.377 12250.191 - 12300.603: 98.3660% ( 7) 00:08:35.377 12300.603 - 12351.015: 98.3990% ( 6) 00:08:35.377 12351.015 - 12401.428: 98.4210% ( 4) 00:08:35.377 12401.428 - 12451.840: 98.4375% ( 3) 00:08:35.377 12451.840 - 12502.252: 98.4540% ( 3) 00:08:35.377 12502.252 - 12552.665: 98.4705% ( 3) 00:08:35.377 12552.665 - 12603.077: 98.4925% ( 4) 00:08:35.377 12603.077 - 12653.489: 98.4980% ( 1) 00:08:35.377 12653.489 - 12703.902: 98.5090% ( 2) 00:08:35.377 12703.902 - 12754.314: 98.5200% ( 2) 00:08:35.377 12754.314 - 12804.726: 98.5365% ( 3) 00:08:35.377 12804.726 - 12855.138: 98.5475% ( 2) 00:08:35.377 12855.138 - 12905.551: 98.5695% ( 4) 00:08:35.377 12905.551 - 13006.375: 98.5971% ( 5) 00:08:35.377 13006.375 - 13107.200: 98.6411% ( 8) 00:08:35.377 13107.200 - 13208.025: 98.6686% ( 5) 00:08:35.377 13208.025 - 13308.849: 98.7126% ( 8) 00:08:35.377 13308.849 - 13409.674: 98.7456% ( 6) 00:08:35.377 13409.674 - 13510.498: 98.7676% ( 4) 00:08:35.377 13510.498 - 13611.323: 98.7896% ( 4) 00:08:35.377 13611.323 - 13712.148: 98.8171% ( 5) 00:08:35.377 13712.148 - 13812.972: 98.8611% ( 8) 00:08:35.377 13812.972 - 13913.797: 98.8886% ( 5) 00:08:35.377 13913.797 - 14014.622: 98.9272% ( 7) 00:08:35.377 14014.622 - 14115.446: 98.9657% ( 7) 00:08:35.377 14115.446 - 14216.271: 98.9987% ( 6) 00:08:35.377 14216.271 - 14317.095: 99.0317% ( 6) 00:08:35.377 14317.095 - 14417.920: 99.0482% ( 3) 00:08:35.377 14417.920 - 14518.745: 99.0647% ( 3) 00:08:35.377 14518.745 - 14619.569: 99.0867% ( 4) 00:08:35.377 14619.569 - 14720.394: 99.0977% ( 2) 00:08:35.377 14720.394 - 14821.218: 99.1142% ( 3) 00:08:35.377 14821.218 - 14922.043: 99.1307% ( 3) 00:08:35.377 14922.043 - 15022.868: 99.1527% ( 4) 00:08:35.377 15022.868 - 15123.692: 99.1692% ( 3) 00:08:35.377 15123.692 - 15224.517: 99.1802% ( 2) 00:08:35.377 15224.517 - 15325.342: 99.1967% ( 3) 00:08:35.377 15325.342 - 15426.166: 99.2132% ( 3) 00:08:35.377 15426.166 - 15526.991: 99.2298% ( 3) 00:08:35.377 15526.991 - 15627.815: 99.2463% ( 3) 00:08:35.377 15627.815 - 15728.640: 99.2683% ( 4) 00:08:35.377 15728.640 - 15829.465: 99.2903% ( 4) 00:08:35.377 15829.465 - 15930.289: 99.2958% ( 1) 00:08:35.377 25609.452 - 25710.277: 99.3123% ( 3) 00:08:35.377 25710.277 - 25811.102: 99.3288% ( 3) 00:08:35.377 25811.102 - 26012.751: 99.3783% ( 9) 00:08:35.377 26012.751 - 26214.400: 99.4223% ( 8) 00:08:35.377 26214.400 - 26416.049: 99.4608% ( 7) 00:08:35.377 26416.049 - 26617.698: 99.4993% ( 7) 00:08:35.377 26617.698 - 26819.348: 99.5434% ( 8) 00:08:35.377 26819.348 - 27020.997: 99.5874% ( 8) 00:08:35.377 27020.997 - 27222.646: 99.6314% ( 8) 00:08:35.377 27222.646 - 27424.295: 99.6754% ( 8) 00:08:35.377 27424.295 - 27625.945: 99.7084% ( 6) 
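In the histograms, each row is a latency bucket in microseconds, the percentage column is cumulative, and the parenthesized figure is the I/O count that landed in that bucket. The nth percentile therefore sits in the first row whose cumulative percentage reaches n; for 0000:00:09.0 that puts p99.9 in the bucket ending at 29239.138us, agreeing with the 99.90000% : 29239.138us line in its summary. A lookup sketch, assuming the same saved perf.log with timestamp-prefixed lines:

# Sketch: report the first bucket at or above a target cumulative percentile.
# Bucket rows look like "<ts> 5192.468 - 5217.674: 0.0770% ( 12)"; the exit
# stops the scan at the first histogram in the file.
target=99.9
awk -v t="$target" '$3 == "-" && $5 ~ /%$/ {
        hi = $4;  sub(/:$/, "", hi)     # upper edge of the bucket, in us
        pct = $5; sub(/%$/, "", pct)    # cumulative percentage
        if (pct + 0 >= t) { print "p" t " is at or below " hi " us"; exit }
     }' perf.log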
00:08:35.377 27625.945 - 27827.594: 99.7579% ( 9) 00:08:35.377 27827.594 - 28029.243: 99.7964% ( 7) 00:08:35.377 28029.243 - 28230.892: 99.8404% ( 8) 00:08:35.377 28230.892 - 28432.542: 99.8845% ( 8) 00:08:35.377 28432.542 - 28634.191: 99.9285% ( 8) 00:08:35.377 28634.191 - 28835.840: 99.9725% ( 8) 00:08:35.377 28835.840 - 29037.489: 100.0000% ( 5) 00:08:35.377 00:08:35.377 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:08:35.377 ============================================================================== 00:08:35.377 Range in us Cumulative IO count 00:08:35.377 4915.200 - 4940.406: 0.0275% ( 5) 00:08:35.377 4940.406 - 4965.612: 0.0935% ( 12) 00:08:35.377 4965.612 - 4990.818: 0.1210% ( 5) 00:08:35.377 4990.818 - 5016.025: 0.1375% ( 3) 00:08:35.378 5016.025 - 5041.231: 0.1485% ( 2) 00:08:35.378 5041.231 - 5066.437: 0.1596% ( 2) 00:08:35.378 5066.437 - 5091.643: 0.1761% ( 3) 00:08:35.378 5091.643 - 5116.849: 0.1981% ( 4) 00:08:35.378 5116.849 - 5142.055: 0.2201% ( 4) 00:08:35.378 5142.055 - 5167.262: 0.2366% ( 3) 00:08:35.378 5167.262 - 5192.468: 0.2586% ( 4) 00:08:35.378 5192.468 - 5217.674: 0.3081% ( 9) 00:08:35.378 5217.674 - 5242.880: 0.4842% ( 32) 00:08:35.378 5242.880 - 5268.086: 0.7262% ( 44) 00:08:35.378 5268.086 - 5293.292: 1.0893% ( 66) 00:08:35.378 5293.292 - 5318.498: 1.4360% ( 63) 00:08:35.378 5318.498 - 5343.705: 1.8871% ( 82) 00:08:35.378 5343.705 - 5368.911: 2.4043% ( 94) 00:08:35.378 5368.911 - 5394.117: 2.8664% ( 84) 00:08:35.378 5394.117 - 5419.323: 3.3726% ( 92) 00:08:35.378 5419.323 - 5444.529: 3.9173% ( 99) 00:08:35.378 5444.529 - 5469.735: 4.5114% ( 108) 00:08:35.378 5469.735 - 5494.942: 5.1827% ( 122) 00:08:35.378 5494.942 - 5520.148: 5.8979% ( 130) 00:08:35.378 5520.148 - 5545.354: 6.7121% ( 148) 00:08:35.378 5545.354 - 5570.560: 7.4714% ( 138) 00:08:35.378 5570.560 - 5595.766: 8.3407% ( 158) 00:08:35.378 5595.766 - 5620.972: 9.2099% ( 158) 00:08:35.378 5620.972 - 5646.178: 10.0297% ( 149) 00:08:35.378 5646.178 - 5671.385: 10.8880% ( 156) 00:08:35.378 5671.385 - 5696.591: 11.7518% ( 157) 00:08:35.378 5696.591 - 5721.797: 12.6540% ( 164) 00:08:35.378 5721.797 - 5747.003: 13.6169% ( 175) 00:08:35.378 5747.003 - 5772.209: 14.5246% ( 165) 00:08:35.378 5772.209 - 5797.415: 15.4159% ( 162) 00:08:35.378 5797.415 - 5822.622: 16.3072% ( 162) 00:08:35.378 5822.622 - 5847.828: 17.2645% ( 174) 00:08:35.378 5847.828 - 5873.034: 18.3264% ( 193) 00:08:35.378 5873.034 - 5898.240: 19.3827% ( 192) 00:08:35.378 5898.240 - 5923.446: 20.4280% ( 190) 00:08:35.378 5923.446 - 5948.652: 21.5064% ( 196) 00:08:35.378 5948.652 - 5973.858: 22.5682% ( 193) 00:08:35.378 5973.858 - 5999.065: 23.6521% ( 197) 00:08:35.378 5999.065 - 6024.271: 24.7634% ( 202) 00:08:35.378 6024.271 - 6049.477: 25.8473% ( 197) 00:08:35.378 6049.477 - 6074.683: 26.9366% ( 198) 00:08:35.378 6074.683 - 6099.889: 28.0865% ( 209) 00:08:35.378 6099.889 - 6125.095: 29.1978% ( 202) 00:08:35.378 6125.095 - 6150.302: 30.3312% ( 206) 00:08:35.378 6150.302 - 6175.508: 31.4316% ( 200) 00:08:35.378 6175.508 - 6200.714: 32.5429% ( 202) 00:08:35.378 6200.714 - 6225.920: 33.6708% ( 205) 00:08:35.378 6225.920 - 6251.126: 34.7821% ( 202) 00:08:35.378 6251.126 - 6276.332: 35.8935% ( 202) 00:08:35.378 6276.332 - 6301.538: 37.0213% ( 205) 00:08:35.378 6301.538 - 6326.745: 38.1272% ( 201) 00:08:35.378 6326.745 - 6351.951: 39.2221% ( 199) 00:08:35.378 6351.951 - 6377.157: 40.3499% ( 205) 00:08:35.378 6377.157 - 6402.363: 41.4833% ( 206) 00:08:35.378 6402.363 - 6427.569: 42.6001% ( 203) 00:08:35.378 6427.569 - 6452.775: 
43.7445% ( 208) 00:08:35.378 6452.775 - 6503.188: 46.1543% ( 438) 00:08:35.378 6503.188 - 6553.600: 48.4870% ( 424) 00:08:35.378 6553.600 - 6604.012: 50.8088% ( 422) 00:08:35.378 6604.012 - 6654.425: 53.2130% ( 437) 00:08:35.378 6654.425 - 6704.837: 55.5568% ( 426) 00:08:35.378 6704.837 - 6755.249: 57.9886% ( 442) 00:08:35.378 6755.249 - 6805.662: 60.3323% ( 426) 00:08:35.378 6805.662 - 6856.074: 62.7311% ( 436) 00:08:35.378 6856.074 - 6906.486: 65.0473% ( 421) 00:08:35.378 6906.486 - 6956.898: 67.2590% ( 402) 00:08:35.378 6956.898 - 7007.311: 69.1626% ( 346) 00:08:35.378 7007.311 - 7057.723: 70.8517% ( 307) 00:08:35.378 7057.723 - 7108.135: 72.4362% ( 288) 00:08:35.378 7108.135 - 7158.548: 73.8116% ( 250) 00:08:35.378 7158.548 - 7208.960: 75.1210% ( 238) 00:08:35.378 7208.960 - 7259.372: 76.1884% ( 194) 00:08:35.378 7259.372 - 7309.785: 77.2007% ( 184) 00:08:35.378 7309.785 - 7360.197: 78.1965% ( 181) 00:08:35.378 7360.197 - 7410.609: 79.1813% ( 179) 00:08:35.378 7410.609 - 7461.022: 80.1386% ( 174) 00:08:35.378 7461.022 - 7511.434: 80.9749% ( 152) 00:08:35.378 7511.434 - 7561.846: 81.6021% ( 114) 00:08:35.378 7561.846 - 7612.258: 82.1908% ( 107) 00:08:35.378 7612.258 - 7662.671: 82.7025% ( 93) 00:08:35.378 7662.671 - 7713.083: 83.1866% ( 88) 00:08:35.378 7713.083 - 7763.495: 83.6598% ( 86) 00:08:35.378 7763.495 - 7813.908: 84.1054% ( 81) 00:08:35.378 7813.908 - 7864.320: 84.5676% ( 84) 00:08:35.378 7864.320 - 7914.732: 85.0242% ( 83) 00:08:35.378 7914.732 - 7965.145: 85.4699% ( 81) 00:08:35.378 7965.145 - 8015.557: 85.9265% ( 83) 00:08:35.378 8015.557 - 8065.969: 86.3776% ( 82) 00:08:35.378 8065.969 - 8116.382: 86.8288% ( 82) 00:08:35.378 8116.382 - 8166.794: 87.2414% ( 75) 00:08:35.378 8166.794 - 8217.206: 87.6155% ( 68) 00:08:35.378 8217.206 - 8267.618: 87.9732% ( 65) 00:08:35.378 8267.618 - 8318.031: 88.3308% ( 65) 00:08:35.378 8318.031 - 8368.443: 88.6664% ( 61) 00:08:35.378 8368.443 - 8418.855: 89.0020% ( 61) 00:08:35.378 8418.855 - 8469.268: 89.3156% ( 57) 00:08:35.378 8469.268 - 8519.680: 89.6072% ( 53) 00:08:35.378 8519.680 - 8570.092: 89.8933% ( 52) 00:08:35.378 8570.092 - 8620.505: 90.1684% ( 50) 00:08:35.378 8620.505 - 8670.917: 90.4104% ( 44) 00:08:35.378 8670.917 - 8721.329: 90.6030% ( 35) 00:08:35.378 8721.329 - 8771.742: 90.8011% ( 36) 00:08:35.378 8771.742 - 8822.154: 90.9386% ( 25) 00:08:35.378 8822.154 - 8872.566: 91.0706% ( 24) 00:08:35.378 8872.566 - 8922.978: 91.2247% ( 28) 00:08:35.378 8922.978 - 8973.391: 91.3622% ( 25) 00:08:35.378 8973.391 - 9023.803: 91.4888% ( 23) 00:08:35.378 9023.803 - 9074.215: 91.6153% ( 23) 00:08:35.378 9074.215 - 9124.628: 91.7474% ( 24) 00:08:35.378 9124.628 - 9175.040: 91.8684% ( 22) 00:08:35.378 9175.040 - 9225.452: 91.9894% ( 22) 00:08:35.378 9225.452 - 9275.865: 92.0995% ( 20) 00:08:35.378 9275.865 - 9326.277: 92.2040% ( 19) 00:08:35.378 9326.277 - 9376.689: 92.3250% ( 22) 00:08:35.378 9376.689 - 9427.102: 92.4351% ( 20) 00:08:35.378 9427.102 - 9477.514: 92.5561% ( 22) 00:08:35.378 9477.514 - 9527.926: 92.6772% ( 22) 00:08:35.378 9527.926 - 9578.338: 92.7872% ( 20) 00:08:35.378 9578.338 - 9628.751: 92.9247% ( 25) 00:08:35.378 9628.751 - 9679.163: 93.0623% ( 25) 00:08:35.378 9679.163 - 9729.575: 93.2163% ( 28) 00:08:35.378 9729.575 - 9779.988: 93.3649% ( 27) 00:08:35.378 9779.988 - 9830.400: 93.5134% ( 27) 00:08:35.378 9830.400 - 9880.812: 93.7225% ( 38) 00:08:35.378 9880.812 - 9931.225: 93.9206% ( 36) 00:08:35.378 9931.225 - 9981.637: 94.1021% ( 33) 00:08:35.378 9981.637 - 10032.049: 94.2672% ( 30) 00:08:35.378 10032.049 - 10082.462: 
94.4377% ( 31) 00:08:35.378 10082.462 - 10132.874: 94.6468% ( 38) 00:08:35.378 10132.874 - 10183.286: 94.8228% ( 32) 00:08:35.378 10183.286 - 10233.698: 94.9879% ( 30) 00:08:35.378 10233.698 - 10284.111: 95.1309% ( 26) 00:08:35.378 10284.111 - 10334.523: 95.2850% ( 28) 00:08:35.378 10334.523 - 10384.935: 95.4170% ( 24) 00:08:35.378 10384.935 - 10435.348: 95.5546% ( 25) 00:08:35.378 10435.348 - 10485.760: 95.6866% ( 24) 00:08:35.378 10485.760 - 10536.172: 95.8077% ( 22) 00:08:35.378 10536.172 - 10586.585: 95.9232% ( 21) 00:08:35.378 10586.585 - 10636.997: 96.0222% ( 18) 00:08:35.378 10636.997 - 10687.409: 96.1378% ( 21) 00:08:35.378 10687.409 - 10737.822: 96.2423% ( 19) 00:08:35.378 10737.822 - 10788.234: 96.3523% ( 20) 00:08:35.378 10788.234 - 10838.646: 96.4624% ( 20) 00:08:35.378 10838.646 - 10889.058: 96.5779% ( 21) 00:08:35.378 10889.058 - 10939.471: 96.6824% ( 19) 00:08:35.378 10939.471 - 10989.883: 96.7925% ( 20) 00:08:35.378 10989.883 - 11040.295: 96.9080% ( 21) 00:08:35.378 11040.295 - 11090.708: 97.0125% ( 19) 00:08:35.378 11090.708 - 11141.120: 97.1226% ( 20) 00:08:35.378 11141.120 - 11191.532: 97.2326% ( 20) 00:08:35.378 11191.532 - 11241.945: 97.3482% ( 21) 00:08:35.378 11241.945 - 11292.357: 97.4582% ( 20) 00:08:35.378 11292.357 - 11342.769: 97.5792% ( 22) 00:08:35.378 11342.769 - 11393.182: 97.7058% ( 23) 00:08:35.378 11393.182 - 11443.594: 97.8323% ( 23) 00:08:35.378 11443.594 - 11494.006: 97.9533% ( 22) 00:08:35.378 11494.006 - 11544.418: 98.0634% ( 20) 00:08:35.378 11544.418 - 11594.831: 98.1404% ( 14) 00:08:35.378 11594.831 - 11645.243: 98.1844% ( 8) 00:08:35.378 11645.243 - 11695.655: 98.2229% ( 7) 00:08:35.378 11695.655 - 11746.068: 98.2669% ( 8) 00:08:35.378 11746.068 - 11796.480: 98.2890% ( 4) 00:08:35.378 11796.480 - 11846.892: 98.3110% ( 4) 00:08:35.378 11846.892 - 11897.305: 98.3330% ( 4) 00:08:35.378 11897.305 - 11947.717: 98.3550% ( 4) 00:08:35.378 11947.717 - 11998.129: 98.3660% ( 2) 00:08:35.378 11998.129 - 12048.542: 98.3825% ( 3) 00:08:35.378 12048.542 - 12098.954: 98.3935% ( 2) 00:08:35.378 12098.954 - 12149.366: 98.4045% ( 2) 00:08:35.378 12149.366 - 12199.778: 98.4155% ( 2) 00:08:35.378 12199.778 - 12250.191: 98.4265% ( 2) 00:08:35.378 12250.191 - 12300.603: 98.4320% ( 1) 00:08:35.378 12300.603 - 12351.015: 98.4430% ( 2) 00:08:35.378 12351.015 - 12401.428: 98.4540% ( 2) 00:08:35.378 12401.428 - 12451.840: 98.4595% ( 1) 00:08:35.378 12451.840 - 12502.252: 98.4705% ( 2) 00:08:35.378 12502.252 - 12552.665: 98.4815% ( 2) 00:08:35.378 12552.665 - 12603.077: 98.4925% ( 2) 00:08:35.378 12603.077 - 12653.489: 98.4980% ( 1) 00:08:35.378 12653.489 - 12703.902: 98.5090% ( 2) 00:08:35.378 12703.902 - 12754.314: 98.5200% ( 2) 00:08:35.378 12754.314 - 12804.726: 98.5310% ( 2) 00:08:35.378 12804.726 - 12855.138: 98.5420% ( 2) 00:08:35.378 12855.138 - 12905.551: 98.5640% ( 4) 00:08:35.378 12905.551 - 13006.375: 98.6301% ( 12) 00:08:35.378 13006.375 - 13107.200: 98.6796% ( 9) 00:08:35.378 13107.200 - 13208.025: 98.6906% ( 2) 00:08:35.378 13208.025 - 13308.849: 98.7071% ( 3) 00:08:35.378 13308.849 - 13409.674: 98.7291% ( 4) 00:08:35.378 13409.674 - 13510.498: 98.7511% ( 4) 00:08:35.379 13510.498 - 13611.323: 98.7731% ( 4) 00:08:35.379 13611.323 - 13712.148: 98.7896% ( 3) 00:08:35.379 13712.148 - 13812.972: 98.8116% ( 4) 00:08:35.379 13812.972 - 13913.797: 98.8336% ( 4) 00:08:35.379 13913.797 - 14014.622: 98.8556% ( 4) 00:08:35.379 14014.622 - 14115.446: 98.8776% ( 4) 00:08:35.379 14115.446 - 14216.271: 98.8941% ( 3) 00:08:35.379 14216.271 - 14317.095: 98.9162% ( 4) 
00:08:35.379 14317.095 - 14417.920: 98.9382% ( 4) 00:08:35.379 14417.920 - 14518.745: 98.9602% ( 4) 00:08:35.379 14518.745 - 14619.569: 98.9712% ( 2) 00:08:35.379 14619.569 - 14720.394: 98.9932% ( 4) 00:08:35.379 14720.394 - 14821.218: 99.0152% ( 4) 00:08:35.379 14821.218 - 14922.043: 99.0372% ( 4) 00:08:35.379 14922.043 - 15022.868: 99.0537% ( 3) 00:08:35.379 15022.868 - 15123.692: 99.0867% ( 6) 00:08:35.379 15123.692 - 15224.517: 99.1252% ( 7) 00:08:35.379 15224.517 - 15325.342: 99.1692% ( 8) 00:08:35.379 15325.342 - 15426.166: 99.2077% ( 7) 00:08:35.379 15426.166 - 15526.991: 99.2518% ( 8) 00:08:35.379 15526.991 - 15627.815: 99.2958% ( 8) 00:08:35.379 24399.557 - 24500.382: 99.3123% ( 3) 00:08:35.379 24500.382 - 24601.206: 99.3343% ( 4) 00:08:35.379 24601.206 - 24702.031: 99.3563% ( 4) 00:08:35.379 24702.031 - 24802.855: 99.3783% ( 4) 00:08:35.379 24802.855 - 24903.680: 99.4003% ( 4) 00:08:35.379 24903.680 - 25004.505: 99.4223% ( 4) 00:08:35.379 25004.505 - 25105.329: 99.4443% ( 4) 00:08:35.379 25105.329 - 25206.154: 99.4718% ( 5) 00:08:35.379 25206.154 - 25306.978: 99.4938% ( 4) 00:08:35.379 25306.978 - 25407.803: 99.5158% ( 4) 00:08:35.379 25407.803 - 25508.628: 99.5379% ( 4) 00:08:35.379 25508.628 - 25609.452: 99.5599% ( 4) 00:08:35.379 25609.452 - 25710.277: 99.5819% ( 4) 00:08:35.379 25710.277 - 25811.102: 99.6094% ( 5) 00:08:35.379 25811.102 - 26012.751: 99.6534% ( 8) 00:08:35.379 26012.751 - 26214.400: 99.6974% ( 8) 00:08:35.379 26214.400 - 26416.049: 99.7414% ( 8) 00:08:35.379 26416.049 - 26617.698: 99.7909% ( 9) 00:08:35.379 26617.698 - 26819.348: 99.8349% ( 8) 00:08:35.379 26819.348 - 27020.997: 99.8845% ( 9) 00:08:35.379 27020.997 - 27222.646: 99.9285% ( 8) 00:08:35.379 27222.646 - 27424.295: 99.9725% ( 8) 00:08:35.379 27424.295 - 27625.945: 100.0000% ( 5) 00:08:35.379 00:08:35.379 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:08:35.379 ============================================================================== 00:08:35.379 Range in us Cumulative IO count 00:08:35.379 5142.055 - 5167.262: 0.0110% ( 2) 00:08:35.379 5167.262 - 5192.468: 0.0495% ( 7) 00:08:35.379 5192.468 - 5217.674: 0.1100% ( 11) 00:08:35.379 5217.674 - 5242.880: 0.2751% ( 30) 00:08:35.379 5242.880 - 5268.086: 0.5007% ( 41) 00:08:35.379 5268.086 - 5293.292: 0.6877% ( 34) 00:08:35.379 5293.292 - 5318.498: 0.9298% ( 44) 00:08:35.379 5318.498 - 5343.705: 1.3534% ( 77) 00:08:35.379 5343.705 - 5368.911: 1.8871% ( 97) 00:08:35.379 5368.911 - 5394.117: 2.4263% ( 98) 00:08:35.379 5394.117 - 5419.323: 3.0700% ( 117) 00:08:35.379 5419.323 - 5444.529: 3.8237% ( 137) 00:08:35.379 5444.529 - 5469.735: 4.4454% ( 113) 00:08:35.379 5469.735 - 5494.942: 5.0396% ( 108) 00:08:35.379 5494.942 - 5520.148: 5.6778% ( 116) 00:08:35.379 5520.148 - 5545.354: 6.3985% ( 131) 00:08:35.379 5545.354 - 5570.560: 7.1798% ( 142) 00:08:35.379 5570.560 - 5595.766: 7.9721% ( 144) 00:08:35.379 5595.766 - 5620.972: 8.8028% ( 151) 00:08:35.379 5620.972 - 5646.178: 9.6666% ( 157) 00:08:35.379 5646.178 - 5671.385: 10.5359% ( 158) 00:08:35.379 5671.385 - 5696.591: 11.4602% ( 168) 00:08:35.379 5696.591 - 5721.797: 12.3955% ( 170) 00:08:35.379 5721.797 - 5747.003: 13.3363% ( 171) 00:08:35.379 5747.003 - 5772.209: 14.2386% ( 164) 00:08:35.379 5772.209 - 5797.415: 15.1243% ( 161) 00:08:35.379 5797.415 - 5822.622: 16.0486% ( 168) 00:08:35.379 5822.622 - 5847.828: 16.9564% ( 165) 00:08:35.379 5847.828 - 5873.034: 17.9688% ( 184) 00:08:35.379 5873.034 - 5898.240: 18.9976% ( 187) 00:08:35.379 5898.240 - 5923.446: 20.1199% ( 204) 
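Since each namespace ran for one second, the per-bucket counts inside a single histogram should sum to roughly that device's IOPS figure (about 18.2k I/Os per controller here); the count column appears to be per-bucket even though the percentage is cumulative, which is why the counts rise and fall. A cross-check sketch under the same perf.log assumption:

# Sketch: sum the per-bucket I/O counts of the first histogram and compare
# with IOPS x runtime (~18176 I/Os expected for the 1 s run in this log).
awk '/Latency histogram/ { h++ }
     h == 1 && $3 == "-" && $NF ~ /\)$/ {
         n = $NF; gsub(/[()]/, "", n); total += n
     }
     END { print "bucket counts sum to", total, "I/Os" }' perf.log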
00:08:35.379 5923.446 - 5948.652: 21.2148% ( 199) 00:08:35.379 5948.652 - 5973.858: 22.2821% ( 194) 00:08:35.379 5973.858 - 5999.065: 23.3770% ( 199) 00:08:35.379 5999.065 - 6024.271: 24.4883% ( 202) 00:08:35.379 6024.271 - 6049.477: 25.5997% ( 202) 00:08:35.379 6049.477 - 6074.683: 26.7165% ( 203) 00:08:35.379 6074.683 - 6099.889: 27.8334% ( 203) 00:08:35.379 6099.889 - 6125.095: 28.9503% ( 203) 00:08:35.379 6125.095 - 6150.302: 30.0726% ( 204) 00:08:35.379 6150.302 - 6175.508: 31.2170% ( 208) 00:08:35.379 6175.508 - 6200.714: 32.3338% ( 203) 00:08:35.379 6200.714 - 6225.920: 33.5167% ( 215) 00:08:35.379 6225.920 - 6251.126: 34.6996% ( 215) 00:08:35.379 6251.126 - 6276.332: 35.8880% ( 216) 00:08:35.379 6276.332 - 6301.538: 37.0709% ( 215) 00:08:35.379 6301.538 - 6326.745: 38.2647% ( 217) 00:08:35.379 6326.745 - 6351.951: 39.4311% ( 212) 00:08:35.379 6351.951 - 6377.157: 40.6360% ( 219) 00:08:35.379 6377.157 - 6402.363: 41.8409% ( 219) 00:08:35.379 6402.363 - 6427.569: 43.0568% ( 221) 00:08:35.379 6427.569 - 6452.775: 44.2452% ( 216) 00:08:35.379 6452.775 - 6503.188: 46.6659% ( 440) 00:08:35.379 6503.188 - 6553.600: 49.0922% ( 441) 00:08:35.379 6553.600 - 6604.012: 51.4855% ( 435) 00:08:35.379 6604.012 - 6654.425: 53.8622% ( 432) 00:08:35.379 6654.425 - 6704.837: 56.3050% ( 444) 00:08:35.379 6704.837 - 6755.249: 58.7588% ( 446) 00:08:35.379 6755.249 - 6805.662: 61.2566% ( 454) 00:08:35.379 6805.662 - 6856.074: 63.6884% ( 442) 00:08:35.379 6856.074 - 6906.486: 65.8891% ( 400) 00:08:35.379 6906.486 - 6956.898: 68.0128% ( 386) 00:08:35.379 6956.898 - 7007.311: 69.9769% ( 357) 00:08:35.379 7007.311 - 7057.723: 71.6769% ( 309) 00:08:35.379 7057.723 - 7108.135: 73.2614% ( 288) 00:08:35.379 7108.135 - 7158.548: 74.6919% ( 260) 00:08:35.379 7158.548 - 7208.960: 75.9573% ( 230) 00:08:35.379 7208.960 - 7259.372: 77.0687% ( 202) 00:08:35.379 7259.372 - 7309.785: 78.1140% ( 190) 00:08:35.379 7309.785 - 7360.197: 79.1043% ( 180) 00:08:35.379 7360.197 - 7410.609: 80.0231% ( 167) 00:08:35.379 7410.609 - 7461.022: 80.9309% ( 165) 00:08:35.379 7461.022 - 7511.434: 81.7121% ( 142) 00:08:35.379 7511.434 - 7561.846: 82.3338% ( 113) 00:08:35.379 7561.846 - 7612.258: 82.8840% ( 100) 00:08:35.379 7612.258 - 7662.671: 83.4122% ( 96) 00:08:35.379 7662.671 - 7713.083: 83.9239% ( 93) 00:08:35.379 7713.083 - 7763.495: 84.3860% ( 84) 00:08:35.379 7763.495 - 7813.908: 84.8371% ( 82) 00:08:35.379 7813.908 - 7864.320: 85.3213% ( 88) 00:08:35.379 7864.320 - 7914.732: 85.7835% ( 84) 00:08:35.379 7914.732 - 7965.145: 86.2016% ( 76) 00:08:35.379 7965.145 - 8015.557: 86.5977% ( 72) 00:08:35.379 8015.557 - 8065.969: 86.9663% ( 67) 00:08:35.379 8065.969 - 8116.382: 87.3404% ( 68) 00:08:35.379 8116.382 - 8166.794: 87.7256% ( 70) 00:08:35.379 8166.794 - 8217.206: 88.1162% ( 71) 00:08:35.379 8217.206 - 8267.618: 88.5123% ( 72) 00:08:35.379 8267.618 - 8318.031: 88.9305% ( 76) 00:08:35.379 8318.031 - 8368.443: 89.3431% ( 75) 00:08:35.379 8368.443 - 8418.855: 89.7337% ( 71) 00:08:35.379 8418.855 - 8469.268: 90.1023% ( 67) 00:08:35.379 8469.268 - 8519.680: 90.4159% ( 57) 00:08:35.379 8519.680 - 8570.092: 90.7185% ( 55) 00:08:35.379 8570.092 - 8620.505: 91.0046% ( 52) 00:08:35.379 8620.505 - 8670.917: 91.2357% ( 42) 00:08:35.379 8670.917 - 8721.329: 91.4062% ( 31) 00:08:35.379 8721.329 - 8771.742: 91.5438% ( 25) 00:08:35.379 8771.742 - 8822.154: 91.6538% ( 20) 00:08:35.379 8822.154 - 8872.566: 91.7694% ( 21) 00:08:35.379 8872.566 - 8922.978: 91.8959% ( 23) 00:08:35.379 8922.978 - 8973.391: 92.0114% ( 21) 00:08:35.379 8973.391 - 
9023.803: 92.1380% ( 23) 00:08:35.379 9023.803 - 9074.215: 92.2700% ( 24) 00:08:35.379 9074.215 - 9124.628: 92.4021% ( 24) 00:08:35.379 9124.628 - 9175.040: 92.5396% ( 25) 00:08:35.379 9175.040 - 9225.452: 92.6882% ( 27) 00:08:35.379 9225.452 - 9275.865: 92.8092% ( 22) 00:08:35.379 9275.865 - 9326.277: 92.9302% ( 22) 00:08:35.379 9326.277 - 9376.689: 93.0348% ( 19) 00:08:35.380 9376.689 - 9427.102: 93.1503% ( 21) 00:08:35.380 9427.102 - 9477.514: 93.2493% ( 18) 00:08:35.380 9477.514 - 9527.926: 93.3484% ( 18) 00:08:35.380 9527.926 - 9578.338: 93.4254% ( 14) 00:08:35.380 9578.338 - 9628.751: 93.4804% ( 10) 00:08:35.380 9628.751 - 9679.163: 93.5354% ( 10) 00:08:35.380 9679.163 - 9729.575: 93.5960% ( 11) 00:08:35.380 9729.575 - 9779.988: 93.7005% ( 19) 00:08:35.380 9779.988 - 9830.400: 93.7995% ( 18) 00:08:35.380 9830.400 - 9880.812: 93.8985% ( 18) 00:08:35.380 9880.812 - 9931.225: 94.0031% ( 19) 00:08:35.380 9931.225 - 9981.637: 94.0911% ( 16) 00:08:35.380 9981.637 - 10032.049: 94.1901% ( 18) 00:08:35.380 10032.049 - 10082.462: 94.2727% ( 15) 00:08:35.380 10082.462 - 10132.874: 94.3552% ( 15) 00:08:35.380 10132.874 - 10183.286: 94.4377% ( 15) 00:08:35.380 10183.286 - 10233.698: 94.5147% ( 14) 00:08:35.380 10233.698 - 10284.111: 94.5973% ( 15) 00:08:35.380 10284.111 - 10334.523: 94.6908% ( 17) 00:08:35.380 10334.523 - 10384.935: 94.7678% ( 14) 00:08:35.380 10384.935 - 10435.348: 94.8559% ( 16) 00:08:35.380 10435.348 - 10485.760: 94.9384% ( 15) 00:08:35.380 10485.760 - 10536.172: 95.0154% ( 14) 00:08:35.380 10536.172 - 10586.585: 95.1419% ( 23) 00:08:35.380 10586.585 - 10636.997: 95.2520% ( 20) 00:08:35.380 10636.997 - 10687.409: 95.3400% ( 16) 00:08:35.380 10687.409 - 10737.822: 95.4500% ( 20) 00:08:35.380 10737.822 - 10788.234: 95.5601% ( 20) 00:08:35.380 10788.234 - 10838.646: 95.6646% ( 19) 00:08:35.380 10838.646 - 10889.058: 95.7636% ( 18) 00:08:35.380 10889.058 - 10939.471: 95.8627% ( 18) 00:08:35.380 10939.471 - 10989.883: 95.9672% ( 19) 00:08:35.380 10989.883 - 11040.295: 96.0717% ( 19) 00:08:35.380 11040.295 - 11090.708: 96.1818% ( 20) 00:08:35.380 11090.708 - 11141.120: 96.2863% ( 19) 00:08:35.380 11141.120 - 11191.532: 96.3908% ( 19) 00:08:35.380 11191.532 - 11241.945: 96.4954% ( 19) 00:08:35.380 11241.945 - 11292.357: 96.5999% ( 19) 00:08:35.380 11292.357 - 11342.769: 96.6989% ( 18) 00:08:35.380 11342.769 - 11393.182: 96.8090% ( 20) 00:08:35.380 11393.182 - 11443.594: 96.8970% ( 16) 00:08:35.380 11443.594 - 11494.006: 96.9850% ( 16) 00:08:35.380 11494.006 - 11544.418: 97.0676% ( 15) 00:08:35.380 11544.418 - 11594.831: 97.1501% ( 15) 00:08:35.380 11594.831 - 11645.243: 97.2271% ( 14) 00:08:35.380 11645.243 - 11695.655: 97.2931% ( 12) 00:08:35.380 11695.655 - 11746.068: 97.3592% ( 12) 00:08:35.380 11746.068 - 11796.480: 97.4252% ( 12) 00:08:35.380 11796.480 - 11846.892: 97.4747% ( 9) 00:08:35.380 11846.892 - 11897.305: 97.5407% ( 12) 00:08:35.380 11897.305 - 11947.717: 97.6067% ( 12) 00:08:35.380 11947.717 - 11998.129: 97.6452% ( 7) 00:08:35.380 11998.129 - 12048.542: 97.7003% ( 10) 00:08:35.380 12048.542 - 12098.954: 97.7663% ( 12) 00:08:35.380 12098.954 - 12149.366: 97.8158% ( 9) 00:08:35.380 12149.366 - 12199.778: 97.8488% ( 6) 00:08:35.380 12199.778 - 12250.191: 97.8818% ( 6) 00:08:35.380 12250.191 - 12300.603: 97.9093% ( 5) 00:08:35.380 12300.603 - 12351.015: 97.9423% ( 6) 00:08:35.380 12351.015 - 12401.428: 97.9754% ( 6) 00:08:35.380 12401.428 - 12451.840: 98.0084% ( 6) 00:08:35.380 12451.840 - 12502.252: 98.0414% ( 6) 00:08:35.380 12502.252 - 12552.665: 98.0579% ( 3) 
00:08:35.380 12552.665 - 12603.077: 98.0854% ( 5) 00:08:35.380 12603.077 - 12653.489: 98.1074% ( 4) 00:08:35.380 12653.489 - 12703.902: 98.1294% ( 4) 00:08:35.380 12703.902 - 12754.314: 98.1514% ( 4) 00:08:35.380 12754.314 - 12804.726: 98.1734% ( 4) 00:08:35.380 12804.726 - 12855.138: 98.1899% ( 3) 00:08:35.380 12855.138 - 12905.551: 98.2119% ( 4) 00:08:35.380 12905.551 - 13006.375: 98.2559% ( 8) 00:08:35.380 13006.375 - 13107.200: 98.3000% ( 8) 00:08:35.380 13107.200 - 13208.025: 98.3385% ( 7) 00:08:35.380 13208.025 - 13308.849: 98.3825% ( 8) 00:08:35.380 13308.849 - 13409.674: 98.4870% ( 19) 00:08:35.380 13409.674 - 13510.498: 98.5530% ( 12) 00:08:35.380 13510.498 - 13611.323: 98.6411% ( 16) 00:08:35.380 13611.323 - 13712.148: 98.7456% ( 19) 00:08:35.380 13712.148 - 13812.972: 98.8281% ( 15) 00:08:35.380 13812.972 - 13913.797: 98.8666% ( 7) 00:08:35.380 13913.797 - 14014.622: 98.9051% ( 7) 00:08:35.380 14014.622 - 14115.446: 98.9492% ( 8) 00:08:35.380 14115.446 - 14216.271: 98.9877% ( 7) 00:08:35.380 14216.271 - 14317.095: 99.0317% ( 8) 00:08:35.380 14317.095 - 14417.920: 99.0757% ( 8) 00:08:35.380 14417.920 - 14518.745: 99.1197% ( 8) 00:08:35.380 14518.745 - 14619.569: 99.1582% ( 7) 00:08:35.380 14619.569 - 14720.394: 99.2022% ( 8) 00:08:35.380 14720.394 - 14821.218: 99.2463% ( 8) 00:08:35.380 14821.218 - 14922.043: 99.2793% ( 6) 00:08:35.380 14922.043 - 15022.868: 99.2958% ( 3) 00:08:35.380 23895.434 - 23996.258: 99.3178% ( 4) 00:08:35.380 23996.258 - 24097.083: 99.3343% ( 3) 00:08:35.380 24097.083 - 24197.908: 99.3618% ( 5) 00:08:35.380 24197.908 - 24298.732: 99.3838% ( 4) 00:08:35.380 24298.732 - 24399.557: 99.4058% ( 4) 00:08:35.380 24399.557 - 24500.382: 99.4278% ( 4) 00:08:35.380 24500.382 - 24601.206: 99.4553% ( 5) 00:08:35.380 24601.206 - 24702.031: 99.4773% ( 4) 00:08:35.380 24702.031 - 24802.855: 99.4993% ( 4) 00:08:35.380 24802.855 - 24903.680: 99.5213% ( 4) 00:08:35.380 24903.680 - 25004.505: 99.5489% ( 5) 00:08:35.380 25004.505 - 25105.329: 99.5709% ( 4) 00:08:35.380 25105.329 - 25206.154: 99.5929% ( 4) 00:08:35.380 25206.154 - 25306.978: 99.6149% ( 4) 00:08:35.380 25306.978 - 25407.803: 99.6369% ( 4) 00:08:35.380 25407.803 - 25508.628: 99.6589% ( 4) 00:08:35.380 25508.628 - 25609.452: 99.6864% ( 5) 00:08:35.380 25609.452 - 25710.277: 99.7084% ( 4) 00:08:35.380 25710.277 - 25811.102: 99.7304% ( 4) 00:08:35.380 25811.102 - 26012.751: 99.7799% ( 9) 00:08:35.380 26012.751 - 26214.400: 99.8239% ( 8) 00:08:35.380 26214.400 - 26416.049: 99.8735% ( 9) 00:08:35.380 26416.049 - 26617.698: 99.9175% ( 8) 00:08:35.380 26617.698 - 26819.348: 99.9615% ( 8) 00:08:35.380 26819.348 - 27020.997: 100.0000% ( 7) 00:08:35.380 00:08:35.380 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:35.380 ============================================================================== 00:08:35.380 Range in us Cumulative IO count 00:08:35.380 5142.055 - 5167.262: 0.0275% ( 5) 00:08:35.380 5167.262 - 5192.468: 0.0495% ( 4) 00:08:35.380 5192.468 - 5217.674: 0.0990% ( 9) 00:08:35.380 5217.674 - 5242.880: 0.2036% ( 19) 00:08:35.380 5242.880 - 5268.086: 0.4236% ( 40) 00:08:35.380 5268.086 - 5293.292: 0.7207% ( 54) 00:08:35.380 5293.292 - 5318.498: 1.0783% ( 65) 00:08:35.380 5318.498 - 5343.705: 1.5460% ( 85) 00:08:35.380 5343.705 - 5368.911: 2.0301% ( 88) 00:08:35.380 5368.911 - 5394.117: 2.6133% ( 106) 00:08:35.380 5394.117 - 5419.323: 3.1855% ( 104) 00:08:35.380 5419.323 - 5444.529: 3.7577% ( 104) 00:08:35.380 5444.529 - 5469.735: 4.2529% ( 90) 00:08:35.380 5469.735 - 5494.942: 4.9406% ( 125) 
00:08:35.380 5494.942 - 5520.148: 5.6503% ( 129) 00:08:35.380 5520.148 - 5545.354: 6.4426% ( 144) 00:08:35.380 5545.354 - 5570.560: 7.2018% ( 138) 00:08:35.380 5570.560 - 5595.766: 7.9996% ( 145) 00:08:35.380 5595.766 - 5620.972: 8.8248% ( 150) 00:08:35.380 5620.972 - 5646.178: 9.6556% ( 151) 00:08:35.380 5646.178 - 5671.385: 10.5304% ( 159) 00:08:35.380 5671.385 - 5696.591: 11.4327% ( 164) 00:08:35.380 5696.591 - 5721.797: 12.3625% ( 169) 00:08:35.380 5721.797 - 5747.003: 13.2592% ( 163) 00:08:35.380 5747.003 - 5772.209: 14.1395% ( 160) 00:08:35.380 5772.209 - 5797.415: 15.0913% ( 173) 00:08:35.380 5797.415 - 5822.622: 16.0431% ( 173) 00:08:35.380 5822.622 - 5847.828: 16.9784% ( 170) 00:08:35.380 5847.828 - 5873.034: 17.9798% ( 182) 00:08:35.380 5873.034 - 5898.240: 19.0086% ( 187) 00:08:35.380 5898.240 - 5923.446: 20.1199% ( 202) 00:08:35.380 5923.446 - 5948.652: 21.2203% ( 200) 00:08:35.380 5948.652 - 5973.858: 22.3206% ( 200) 00:08:35.380 5973.858 - 5999.065: 23.4320% ( 202) 00:08:35.380 5999.065 - 6024.271: 24.5544% ( 204) 00:08:35.380 6024.271 - 6049.477: 25.6547% ( 200) 00:08:35.380 6049.477 - 6074.683: 26.8211% ( 212) 00:08:35.380 6074.683 - 6099.889: 27.9544% ( 206) 00:08:35.380 6099.889 - 6125.095: 29.1318% ( 214) 00:08:35.380 6125.095 - 6150.302: 30.2762% ( 208) 00:08:35.380 6150.302 - 6175.508: 31.4096% ( 206) 00:08:35.380 6175.508 - 6200.714: 32.4879% ( 196) 00:08:35.380 6200.714 - 6225.920: 33.6763% ( 216) 00:08:35.380 6225.920 - 6251.126: 34.8371% ( 211) 00:08:35.380 6251.126 - 6276.332: 36.0090% ( 213) 00:08:35.380 6276.332 - 6301.538: 37.1974% ( 216) 00:08:35.380 6301.538 - 6326.745: 38.3638% ( 212) 00:08:35.380 6326.745 - 6351.951: 39.5467% ( 215) 00:08:35.380 6351.951 - 6377.157: 40.7240% ( 214) 00:08:35.380 6377.157 - 6402.363: 41.9564% ( 224) 00:08:35.380 6402.363 - 6427.569: 43.1228% ( 212) 00:08:35.380 6427.569 - 6452.775: 44.3112% ( 216) 00:08:35.380 6452.775 - 6503.188: 46.6879% ( 432) 00:08:35.380 6503.188 - 6553.600: 49.0537% ( 430) 00:08:35.380 6553.600 - 6604.012: 51.4745% ( 440) 00:08:35.380 6604.012 - 6654.425: 53.9007% ( 441) 00:08:35.380 6654.425 - 6704.837: 56.4206% ( 458) 00:08:35.380 6704.837 - 6755.249: 58.8578% ( 443) 00:08:35.380 6755.249 - 6805.662: 61.2786% ( 440) 00:08:35.380 6805.662 - 6856.074: 63.7379% ( 447) 00:08:35.380 6856.074 - 6906.486: 66.0871% ( 427) 00:08:35.380 6906.486 - 6956.898: 68.2383% ( 391) 00:08:35.380 6956.898 - 7007.311: 70.0869% ( 336) 00:08:35.380 7007.311 - 7057.723: 71.8310% ( 317) 00:08:35.380 7057.723 - 7108.135: 73.3770% ( 281) 00:08:35.380 7108.135 - 7158.548: 74.7634% ( 252) 00:08:35.380 7158.548 - 7208.960: 75.9848% ( 222) 00:08:35.380 7208.960 - 7259.372: 77.1347% ( 209) 00:08:35.380 7259.372 - 7309.785: 78.1910% ( 192) 00:08:35.380 7309.785 - 7360.197: 79.2474% ( 192) 00:08:35.380 7360.197 - 7410.609: 80.2212% ( 177) 00:08:35.381 7410.609 - 7461.022: 81.1180% ( 163) 00:08:35.381 7461.022 - 7511.434: 81.9542% ( 152) 00:08:35.381 7511.434 - 7561.846: 82.6364% ( 124) 00:08:35.381 7561.846 - 7612.258: 83.2581% ( 113) 00:08:35.381 7612.258 - 7662.671: 83.7973% ( 98) 00:08:35.381 7662.671 - 7713.083: 84.2870% ( 89) 00:08:35.381 7713.083 - 7763.495: 84.7436% ( 83) 00:08:35.381 7763.495 - 7813.908: 85.1893% ( 81) 00:08:35.381 7813.908 - 7864.320: 85.6184% ( 78) 00:08:35.381 7864.320 - 7914.732: 86.0420% ( 77) 00:08:35.381 7914.732 - 7965.145: 86.4767% ( 79) 00:08:35.381 7965.145 - 8015.557: 86.9003% ( 77) 00:08:35.381 8015.557 - 8065.969: 87.3294% ( 78) 00:08:35.381 8065.969 - 8116.382: 87.7751% ( 81) 00:08:35.381 
8116.382 - 8166.794: 88.2097% ( 79) 00:08:35.381 8166.794 - 8217.206: 88.6444% ( 79) 00:08:35.381 8217.206 - 8267.618: 89.0350% ( 71) 00:08:35.381 8267.618 - 8318.031: 89.4256% ( 71) 00:08:35.381 8318.031 - 8368.443: 89.8382% ( 75) 00:08:35.381 8368.443 - 8418.855: 90.2069% ( 67) 00:08:35.381 8418.855 - 8469.268: 90.5755% ( 67) 00:08:35.381 8469.268 - 8519.680: 90.9496% ( 68) 00:08:35.381 8519.680 - 8570.092: 91.3017% ( 64) 00:08:35.381 8570.092 - 8620.505: 91.6648% ( 66) 00:08:35.381 8620.505 - 8670.917: 91.9729% ( 56) 00:08:35.381 8670.917 - 8721.329: 92.1875% ( 39) 00:08:35.381 8721.329 - 8771.742: 92.3581% ( 31) 00:08:35.381 8771.742 - 8822.154: 92.5176% ( 29) 00:08:35.381 8822.154 - 8872.566: 92.6717% ( 28) 00:08:35.381 8872.566 - 8922.978: 92.8037% ( 24) 00:08:35.381 8922.978 - 8973.391: 92.9247% ( 22) 00:08:35.381 8973.391 - 9023.803: 93.0568% ( 24) 00:08:35.381 9023.803 - 9074.215: 93.1778% ( 22) 00:08:35.381 9074.215 - 9124.628: 93.2824% ( 19) 00:08:35.381 9124.628 - 9175.040: 93.3979% ( 21) 00:08:35.381 9175.040 - 9225.452: 93.4914% ( 17) 00:08:35.381 9225.452 - 9275.865: 93.5794% ( 16) 00:08:35.381 9275.865 - 9326.277: 93.6730% ( 17) 00:08:35.381 9326.277 - 9376.689: 93.7720% ( 18) 00:08:35.381 9376.689 - 9427.102: 93.8600% ( 16) 00:08:35.381 9427.102 - 9477.514: 93.9481% ( 16) 00:08:35.381 9477.514 - 9527.926: 94.0361% ( 16) 00:08:35.381 9527.926 - 9578.338: 94.1241% ( 16) 00:08:35.381 9578.338 - 9628.751: 94.2176% ( 17) 00:08:35.381 9628.751 - 9679.163: 94.3057% ( 16) 00:08:35.381 9679.163 - 9729.575: 94.3992% ( 17) 00:08:35.381 9729.575 - 9779.988: 94.4872% ( 16) 00:08:35.381 9779.988 - 9830.400: 94.5643% ( 14) 00:08:35.381 9830.400 - 9880.812: 94.6303% ( 12) 00:08:35.381 9880.812 - 9931.225: 94.7073% ( 14) 00:08:35.381 9931.225 - 9981.637: 94.7568% ( 9) 00:08:35.381 9981.637 - 10032.049: 94.7953% ( 7) 00:08:35.381 10032.049 - 10082.462: 94.8228% ( 5) 00:08:35.381 10082.462 - 10132.874: 94.8559% ( 6) 00:08:35.381 10132.874 - 10183.286: 94.8724% ( 3) 00:08:35.381 10183.286 - 10233.698: 94.8999% ( 5) 00:08:35.381 10233.698 - 10284.111: 94.9274% ( 5) 00:08:35.381 10284.111 - 10334.523: 94.9549% ( 5) 00:08:35.381 10334.523 - 10384.935: 94.9714% ( 3) 00:08:35.381 10384.935 - 10435.348: 94.9824% ( 2) 00:08:35.381 10435.348 - 10485.760: 94.9934% ( 2) 00:08:35.381 10485.760 - 10536.172: 95.0099% ( 3) 00:08:35.381 10536.172 - 10586.585: 95.0429% ( 6) 00:08:35.381 10586.585 - 10636.997: 95.0759% ( 6) 00:08:35.381 10636.997 - 10687.409: 95.1034% ( 5) 00:08:35.381 10687.409 - 10737.822: 95.1364% ( 6) 00:08:35.381 10737.822 - 10788.234: 95.1750% ( 7) 00:08:35.381 10788.234 - 10838.646: 95.2080% ( 6) 00:08:35.381 10838.646 - 10889.058: 95.2685% ( 11) 00:08:35.381 10889.058 - 10939.471: 95.3180% ( 9) 00:08:35.381 10939.471 - 10989.883: 95.3455% ( 5) 00:08:35.381 10989.883 - 11040.295: 95.3730% ( 5) 00:08:35.381 11040.295 - 11090.708: 95.4005% ( 5) 00:08:35.381 11090.708 - 11141.120: 95.4335% ( 6) 00:08:35.381 11141.120 - 11191.532: 95.4610% ( 5) 00:08:35.381 11191.532 - 11241.945: 95.4941% ( 6) 00:08:35.381 11241.945 - 11292.357: 95.5216% ( 5) 00:08:35.381 11292.357 - 11342.769: 95.5546% ( 6) 00:08:35.381 11342.769 - 11393.182: 95.5931% ( 7) 00:08:35.381 11393.182 - 11443.594: 95.6371% ( 8) 00:08:35.381 11443.594 - 11494.006: 95.6921% ( 10) 00:08:35.381 11494.006 - 11544.418: 95.7691% ( 14) 00:08:35.381 11544.418 - 11594.831: 95.8352% ( 12) 00:08:35.381 11594.831 - 11645.243: 95.9122% ( 14) 00:08:35.381 11645.243 - 11695.655: 95.9947% ( 15) 00:08:35.381 11695.655 - 11746.068: 96.0772% ( 15) 
00:08:35.381 11746.068 - 11796.480: 96.1598% ( 15) 00:08:35.381 11796.480 - 11846.892: 96.2478% ( 16) 00:08:35.381 11846.892 - 11897.305: 96.3358% ( 16) 00:08:35.381 11897.305 - 11947.717: 96.4184% ( 15) 00:08:35.381 11947.717 - 11998.129: 96.5064% ( 16) 00:08:35.381 11998.129 - 12048.542: 96.5834% ( 14) 00:08:35.381 12048.542 - 12098.954: 96.6714% ( 16) 00:08:35.381 12098.954 - 12149.366: 96.7595% ( 16) 00:08:35.381 12149.366 - 12199.778: 96.8365% ( 14) 00:08:35.381 12199.778 - 12250.191: 96.9025% ( 12) 00:08:35.381 12250.191 - 12300.603: 96.9685% ( 12) 00:08:35.381 12300.603 - 12351.015: 97.0346% ( 12) 00:08:35.381 12351.015 - 12401.428: 97.1006% ( 12) 00:08:35.381 12401.428 - 12451.840: 97.1666% ( 12) 00:08:35.381 12451.840 - 12502.252: 97.2271% ( 11) 00:08:35.381 12502.252 - 12552.665: 97.2821% ( 10) 00:08:35.381 12552.665 - 12603.077: 97.3482% ( 12) 00:08:35.381 12603.077 - 12653.489: 97.4142% ( 12) 00:08:35.381 12653.489 - 12703.902: 97.4802% ( 12) 00:08:35.381 12703.902 - 12754.314: 97.5407% ( 11) 00:08:35.381 12754.314 - 12804.726: 97.6067% ( 12) 00:08:35.381 12804.726 - 12855.138: 97.6673% ( 11) 00:08:35.381 12855.138 - 12905.551: 97.7278% ( 11) 00:08:35.381 12905.551 - 13006.375: 97.8928% ( 30) 00:08:35.381 13006.375 - 13107.200: 98.0359% ( 26) 00:08:35.381 13107.200 - 13208.025: 98.1679% ( 24) 00:08:35.381 13208.025 - 13308.849: 98.2779% ( 20) 00:08:35.381 13308.849 - 13409.674: 98.3495% ( 13) 00:08:35.381 13409.674 - 13510.498: 98.4320% ( 15) 00:08:35.381 13510.498 - 13611.323: 98.5200% ( 16) 00:08:35.381 13611.323 - 13712.148: 98.6081% ( 16) 00:08:35.381 13712.148 - 13812.972: 98.6851% ( 14) 00:08:35.381 13812.972 - 13913.797: 98.7731% ( 16) 00:08:35.381 13913.797 - 14014.622: 98.8556% ( 15) 00:08:35.381 14014.622 - 14115.446: 98.9327% ( 14) 00:08:35.381 14115.446 - 14216.271: 98.9932% ( 11) 00:08:35.381 14216.271 - 14317.095: 99.0592% ( 12) 00:08:35.381 14317.095 - 14417.920: 99.1197% ( 11) 00:08:35.381 14417.920 - 14518.745: 99.1747% ( 10) 00:08:35.381 14518.745 - 14619.569: 99.2353% ( 11) 00:08:35.381 14619.569 - 14720.394: 99.2628% ( 5) 00:08:35.381 14720.394 - 14821.218: 99.2848% ( 4) 00:08:35.381 14821.218 - 14922.043: 99.2958% ( 2) 00:08:35.381 22584.714 - 22685.538: 99.3123% ( 3) 00:08:35.381 22685.538 - 22786.363: 99.3343% ( 4) 00:08:35.381 22786.363 - 22887.188: 99.3563% ( 4) 00:08:35.381 22887.188 - 22988.012: 99.3783% ( 4) 00:08:35.381 22988.012 - 23088.837: 99.4003% ( 4) 00:08:35.381 23088.837 - 23189.662: 99.4278% ( 5) 00:08:35.381 23189.662 - 23290.486: 99.4498% ( 4) 00:08:35.381 23290.486 - 23391.311: 99.4718% ( 4) 00:08:35.381 23391.311 - 23492.135: 99.4938% ( 4) 00:08:35.381 23492.135 - 23592.960: 99.5213% ( 5) 00:08:35.381 23592.960 - 23693.785: 99.5434% ( 4) 00:08:35.381 23693.785 - 23794.609: 99.5599% ( 3) 00:08:35.381 23794.609 - 23895.434: 99.5819% ( 4) 00:08:35.381 23895.434 - 23996.258: 99.6094% ( 5) 00:08:35.381 23996.258 - 24097.083: 99.6314% ( 4) 00:08:35.381 24097.083 - 24197.908: 99.6534% ( 4) 00:08:35.381 24197.908 - 24298.732: 99.6754% ( 4) 00:08:35.381 24298.732 - 24399.557: 99.6974% ( 4) 00:08:35.381 24399.557 - 24500.382: 99.7194% ( 4) 00:08:35.381 24500.382 - 24601.206: 99.7414% ( 4) 00:08:35.381 24601.206 - 24702.031: 99.7634% ( 4) 00:08:35.381 24702.031 - 24802.855: 99.7854% ( 4) 00:08:35.381 24802.855 - 24903.680: 99.8074% ( 4) 00:08:35.381 24903.680 - 25004.505: 99.8294% ( 4) 00:08:35.381 25004.505 - 25105.329: 99.8570% ( 5) 00:08:35.381 25105.329 - 25206.154: 99.8790% ( 4) 00:08:35.381 25206.154 - 25306.978: 99.9010% ( 4) 00:08:35.381 
25306.978 - 25407.803: 99.9230% ( 4) 00:08:35.381 25407.803 - 25508.628: 99.9505% ( 5) 00:08:35.381 25508.628 - 25609.452: 99.9725% ( 4) 00:08:35.381 25609.452 - 25710.277: 99.9945% ( 4) 00:08:35.381 25710.277 - 25811.102: 100.0000% ( 1) 00:08:35.381 00:08:35.381 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:35.381 ============================================================================== 00:08:35.381 Range in us Cumulative IO count 00:08:35.381 5192.468 - 5217.674: 0.0164% ( 3) 00:08:35.381 5217.674 - 5242.880: 0.1311% ( 21) 00:08:35.381 5242.880 - 5268.086: 0.3715% ( 44) 00:08:35.381 5268.086 - 5293.292: 0.7594% ( 71) 00:08:35.381 5293.292 - 5318.498: 1.1254% ( 67) 00:08:35.381 5318.498 - 5343.705: 1.5898% ( 85) 00:08:35.381 5343.705 - 5368.911: 2.1853% ( 109) 00:08:35.381 5368.911 - 5394.117: 2.6715% ( 89) 00:08:35.381 5394.117 - 5419.323: 3.1632% ( 90) 00:08:35.381 5419.323 - 5444.529: 3.7205% ( 102) 00:08:35.381 5444.529 - 5469.735: 4.2941% ( 105) 00:08:35.381 5469.735 - 5494.942: 4.8733% ( 106) 00:08:35.381 5494.942 - 5520.148: 5.5616% ( 126) 00:08:35.381 5520.148 - 5545.354: 6.2828% ( 132) 00:08:35.381 5545.354 - 5570.560: 7.0640% ( 143) 00:08:35.381 5570.560 - 5595.766: 7.8398% ( 142) 00:08:35.381 5595.766 - 5620.972: 8.6429% ( 147) 00:08:35.381 5620.972 - 5646.178: 9.5280% ( 162) 00:08:35.381 5646.178 - 5671.385: 10.3802% ( 156) 00:08:35.381 5671.385 - 5696.591: 11.2434% ( 158) 00:08:35.381 5696.591 - 5721.797: 12.1394% ( 164) 00:08:35.381 5721.797 - 5747.003: 13.0518% ( 167) 00:08:35.381 5747.003 - 5772.209: 14.0079% ( 175) 00:08:35.381 5772.209 - 5797.415: 14.9858% ( 179) 00:08:35.381 5797.415 - 5822.622: 15.9309% ( 173) 00:08:35.381 5822.622 - 5847.828: 16.8979% ( 177) 00:08:35.382 5847.828 - 5873.034: 17.9032% ( 184) 00:08:35.382 5873.034 - 5898.240: 18.9084% ( 184) 00:08:35.382 5898.240 - 5923.446: 20.0120% ( 202) 00:08:35.382 5923.446 - 5948.652: 21.1047% ( 200) 00:08:35.382 5948.652 - 5973.858: 22.1864% ( 198) 00:08:35.382 5973.858 - 5999.065: 23.2627% ( 197) 00:08:35.382 5999.065 - 6024.271: 24.3772% ( 204) 00:08:35.382 6024.271 - 6049.477: 25.5409% ( 213) 00:08:35.382 6049.477 - 6074.683: 26.7592% ( 223) 00:08:35.382 6074.683 - 6099.889: 27.9065% ( 210) 00:08:35.382 6099.889 - 6125.095: 29.0483% ( 209) 00:08:35.382 6125.095 - 6150.302: 30.1792% ( 207) 00:08:35.382 6150.302 - 6175.508: 31.3046% ( 206) 00:08:35.382 6175.508 - 6200.714: 32.4519% ( 210) 00:08:35.382 6200.714 - 6225.920: 33.6156% ( 213) 00:08:35.382 6225.920 - 6251.126: 34.7793% ( 213) 00:08:35.382 6251.126 - 6276.332: 35.9430% ( 213) 00:08:35.382 6276.332 - 6301.538: 37.0957% ( 211) 00:08:35.382 6301.538 - 6326.745: 38.2212% ( 206) 00:08:35.382 6326.745 - 6351.951: 39.3411% ( 205) 00:08:35.382 6351.951 - 6377.157: 40.5212% ( 216) 00:08:35.382 6377.157 - 6402.363: 41.6958% ( 215) 00:08:35.382 6402.363 - 6427.569: 42.9141% ( 223) 00:08:35.382 6427.569 - 6452.775: 44.1106% ( 219) 00:08:35.382 6452.775 - 6503.188: 46.4379% ( 426) 00:08:35.382 6503.188 - 6553.600: 48.8199% ( 436) 00:08:35.382 6553.600 - 6604.012: 51.2729% ( 449) 00:08:35.382 6604.012 - 6654.425: 53.6604% ( 437) 00:08:35.382 6654.425 - 6704.837: 56.0697% ( 441) 00:08:35.382 6704.837 - 6755.249: 58.4681% ( 439) 00:08:35.382 6755.249 - 6805.662: 60.9539% ( 455) 00:08:35.382 6805.662 - 6856.074: 63.4506% ( 457) 00:08:35.382 6856.074 - 6906.486: 65.7561% ( 422) 00:08:35.382 6906.486 - 6956.898: 67.9250% ( 397) 00:08:35.382 6956.898 - 7007.311: 69.8208% ( 347) 00:08:35.382 7007.311 - 7057.723: 71.4762% ( 303) 00:08:35.382 
7057.723 - 7108.135: 73.0660% ( 291) 00:08:35.382 7108.135 - 7158.548: 74.4865% ( 260) 00:08:35.382 7158.548 - 7208.960: 75.7758% ( 236) 00:08:35.382 7208.960 - 7259.372: 76.9559% ( 216) 00:08:35.382 7259.372 - 7309.785: 78.0704% ( 204) 00:08:35.382 7309.785 - 7360.197: 79.1193% ( 192) 00:08:35.382 7360.197 - 7410.609: 80.1191% ( 183) 00:08:35.382 7410.609 - 7461.022: 81.0151% ( 164) 00:08:35.382 7461.022 - 7511.434: 81.8619% ( 155) 00:08:35.382 7511.434 - 7561.846: 82.5885% ( 133) 00:08:35.382 7561.846 - 7612.258: 83.2386% ( 119) 00:08:35.382 7612.258 - 7662.671: 83.7740% ( 98) 00:08:35.382 7662.671 - 7713.083: 84.2384% ( 85) 00:08:35.382 7713.083 - 7763.495: 84.7301% ( 90) 00:08:35.382 7763.495 - 7813.908: 85.2000% ( 86) 00:08:35.382 7813.908 - 7864.320: 85.6643% ( 85) 00:08:35.382 7864.320 - 7914.732: 86.1396% ( 87) 00:08:35.382 7914.732 - 7965.145: 86.5767% ( 80) 00:08:35.382 7965.145 - 8015.557: 86.9919% ( 76) 00:08:35.382 8015.557 - 8065.969: 87.4181% ( 78) 00:08:35.382 8065.969 - 8116.382: 87.8606% ( 81) 00:08:35.382 8116.382 - 8166.794: 88.2758% ( 76) 00:08:35.382 8166.794 - 8217.206: 88.6746% ( 73) 00:08:35.382 8217.206 - 8267.618: 89.0953% ( 77) 00:08:35.382 8267.618 - 8318.031: 89.4996% ( 74) 00:08:35.382 8318.031 - 8368.443: 89.9202% ( 77) 00:08:35.382 8368.443 - 8418.855: 90.3409% ( 77) 00:08:35.382 8418.855 - 8469.268: 90.7233% ( 70) 00:08:35.382 8469.268 - 8519.680: 91.1112% ( 71) 00:08:35.382 8519.680 - 8570.092: 91.4390% ( 60) 00:08:35.382 8570.092 - 8620.505: 91.7395% ( 55) 00:08:35.382 8620.505 - 8670.917: 91.9908% ( 46) 00:08:35.382 8670.917 - 8721.329: 92.1984% ( 38) 00:08:35.382 8721.329 - 8771.742: 92.3514% ( 28) 00:08:35.382 8771.742 - 8822.154: 92.4880% ( 25) 00:08:35.382 8822.154 - 8872.566: 92.6191% ( 24) 00:08:35.382 8872.566 - 8922.978: 92.7393% ( 22) 00:08:35.382 8922.978 - 8973.391: 92.8759% ( 25) 00:08:35.382 8973.391 - 9023.803: 93.0179% ( 26) 00:08:35.382 9023.803 - 9074.215: 93.1490% ( 24) 00:08:35.382 9074.215 - 9124.628: 93.2856% ( 25) 00:08:35.382 9124.628 - 9175.040: 93.4167% ( 24) 00:08:35.382 9175.040 - 9225.452: 93.5369% ( 22) 00:08:35.382 9225.452 - 9275.865: 93.6517% ( 21) 00:08:35.382 9275.865 - 9326.277: 93.7500% ( 18) 00:08:35.382 9326.277 - 9376.689: 93.8538% ( 19) 00:08:35.382 9376.689 - 9427.102: 93.9303% ( 14) 00:08:35.382 9427.102 - 9477.514: 94.0177% ( 16) 00:08:35.382 9477.514 - 9527.926: 94.1051% ( 16) 00:08:35.382 9527.926 - 9578.338: 94.1980% ( 17) 00:08:35.382 9578.338 - 9628.751: 94.2799% ( 15) 00:08:35.382 9628.751 - 9679.163: 94.3510% ( 13) 00:08:35.382 9679.163 - 9729.575: 94.4165% ( 12) 00:08:35.382 9729.575 - 9779.988: 94.4875% ( 13) 00:08:35.382 9779.988 - 9830.400: 94.5640% ( 14) 00:08:35.382 9830.400 - 9880.812: 94.6351% ( 13) 00:08:35.382 9880.812 - 9931.225: 94.6842% ( 9) 00:08:35.382 9931.225 - 9981.637: 94.7443% ( 11) 00:08:35.382 9981.637 - 10032.049: 94.8044% ( 11) 00:08:35.382 10032.049 - 10082.462: 94.8590% ( 10) 00:08:35.382 10082.462 - 10132.874: 94.8973% ( 7) 00:08:35.382 10132.874 - 10183.286: 94.9410% ( 8) 00:08:35.382 10183.286 - 10233.698: 94.9792% ( 7) 00:08:35.382 10233.698 - 10284.111: 95.0175% ( 7) 00:08:35.382 10284.111 - 10334.523: 95.0612% ( 8) 00:08:35.382 10334.523 - 10384.935: 95.1049% ( 8) 00:08:35.382 10384.935 - 10435.348: 95.1431% ( 7) 00:08:35.382 10435.348 - 10485.760: 95.1814% ( 7) 00:08:35.382 10485.760 - 10536.172: 95.2251% ( 8) 00:08:35.382 10536.172 - 10586.585: 95.2688% ( 8) 00:08:35.382 10586.585 - 10636.997: 95.2906% ( 4) 00:08:35.382 10636.997 - 10687.409: 95.2961% ( 1) 00:08:35.382 
10687.409 - 10737.822: 95.3070% ( 2) 00:08:35.382 10737.822 - 10788.234: 95.3180% ( 2) 00:08:35.382 10788.234 - 10838.646: 95.3289% ( 2) 00:08:35.382 10838.646 - 10889.058: 95.3398% ( 2) 00:08:35.382 10889.058 - 10939.471: 95.3507% ( 2) 00:08:35.382 10939.471 - 10989.883: 95.3617% ( 2) 00:08:35.382 10989.883 - 11040.295: 95.3835% ( 4) 00:08:35.382 11040.295 - 11090.708: 95.4054% ( 4) 00:08:35.382 11090.708 - 11141.120: 95.4491% ( 8) 00:08:35.382 11141.120 - 11191.532: 95.5092% ( 11) 00:08:35.382 11191.532 - 11241.945: 95.5583% ( 9) 00:08:35.382 11241.945 - 11292.357: 95.6130% ( 10) 00:08:35.382 11292.357 - 11342.769: 95.6676% ( 10) 00:08:35.382 11342.769 - 11393.182: 95.7222% ( 10) 00:08:35.382 11393.182 - 11443.594: 95.7714% ( 9) 00:08:35.382 11443.594 - 11494.006: 95.8206% ( 9) 00:08:35.382 11494.006 - 11544.418: 95.8752% ( 10) 00:08:35.382 11544.418 - 11594.831: 95.9244% ( 9) 00:08:35.382 11594.831 - 11645.243: 95.9790% ( 10) 00:08:35.382 11645.243 - 11695.655: 96.0500% ( 13) 00:08:35.382 11695.655 - 11746.068: 96.1156% ( 12) 00:08:35.382 11746.068 - 11796.480: 96.1757% ( 11) 00:08:35.382 11796.480 - 11846.892: 96.2413% ( 12) 00:08:35.382 11846.892 - 11897.305: 96.3068% ( 12) 00:08:35.382 11897.305 - 11947.717: 96.3669% ( 11) 00:08:35.382 11947.717 - 11998.129: 96.4598% ( 17) 00:08:35.382 11998.129 - 12048.542: 96.5472% ( 16) 00:08:35.382 12048.542 - 12098.954: 96.6292% ( 15) 00:08:35.382 12098.954 - 12149.366: 96.7166% ( 16) 00:08:35.382 12149.366 - 12199.778: 96.8040% ( 16) 00:08:35.382 12199.778 - 12250.191: 96.8859% ( 15) 00:08:35.382 12250.191 - 12300.603: 96.9733% ( 16) 00:08:35.382 12300.603 - 12351.015: 97.0608% ( 16) 00:08:35.382 12351.015 - 12401.428: 97.1482% ( 16) 00:08:35.382 12401.428 - 12451.840: 97.2356% ( 16) 00:08:35.382 12451.840 - 12502.252: 97.3121% ( 14) 00:08:35.382 12502.252 - 12552.665: 97.4049% ( 17) 00:08:35.382 12552.665 - 12603.077: 97.5142% ( 20) 00:08:35.382 12603.077 - 12653.489: 97.6180% ( 19) 00:08:35.382 12653.489 - 12703.902: 97.7054% ( 16) 00:08:35.382 12703.902 - 12754.314: 97.7819% ( 14) 00:08:35.382 12754.314 - 12804.726: 97.8311% ( 9) 00:08:35.382 12804.726 - 12855.138: 97.8912% ( 11) 00:08:35.382 12855.138 - 12905.551: 97.9403% ( 9) 00:08:35.382 12905.551 - 13006.375: 98.0387% ( 18) 00:08:35.382 13006.375 - 13107.200: 98.1316% ( 17) 00:08:35.382 13107.200 - 13208.025: 98.2135% ( 15) 00:08:35.382 13208.025 - 13308.849: 98.2955% ( 15) 00:08:35.382 13308.849 - 13409.674: 98.3774% ( 15) 00:08:35.382 13409.674 - 13510.498: 98.4594% ( 15) 00:08:35.382 13510.498 - 13611.323: 98.5358% ( 14) 00:08:35.382 13611.323 - 13712.148: 98.6123% ( 14) 00:08:35.382 13712.148 - 13812.972: 98.6724% ( 11) 00:08:35.382 13812.972 - 13913.797: 98.7161% ( 8) 00:08:35.382 13913.797 - 14014.622: 98.7598% ( 8) 00:08:35.382 14014.622 - 14115.446: 98.8035% ( 8) 00:08:35.382 14115.446 - 14216.271: 98.8472% ( 8) 00:08:35.382 14216.271 - 14317.095: 98.8855% ( 7) 00:08:35.382 14317.095 - 14417.920: 98.9128% ( 5) 00:08:35.382 14417.920 - 14518.745: 98.9620% ( 9) 00:08:35.382 14518.745 - 14619.569: 99.0221% ( 11) 00:08:35.382 14619.569 - 14720.394: 99.0767% ( 10) 00:08:35.382 14720.394 - 14821.218: 99.1313% ( 10) 00:08:35.382 14821.218 - 14922.043: 99.1805% ( 9) 00:08:35.382 14922.043 - 15022.868: 99.2188% ( 7) 00:08:35.382 15022.868 - 15123.692: 99.2734% ( 10) 00:08:35.382 15123.692 - 15224.517: 99.3389% ( 12) 00:08:35.382 15224.517 - 15325.342: 99.4045% ( 12) 00:08:35.382 15325.342 - 15426.166: 99.4701% ( 12) 00:08:35.382 15426.166 - 15526.991: 99.5192% ( 9) 00:08:35.382 15526.991 
- 15627.815: 99.5411% ( 4)
00:08:35.382 15627.815 - 15728.640: 99.5684% ( 5)
00:08:35.382 15728.640 - 15829.465: 99.5903% ( 4)
00:08:35.382 15829.465 - 15930.289: 99.6121% ( 4)
00:08:35.382 15930.289 - 16031.114: 99.6340% ( 4)
00:08:35.382 16031.114 - 16131.938: 99.6613% ( 5)
00:08:35.382 16131.938 - 16232.763: 99.6831% ( 4)
00:08:35.382 16232.763 - 16333.588: 99.7050% ( 4)
00:08:35.382 16333.588 - 16434.412: 99.7268% ( 4)
00:08:35.382 16434.412 - 16535.237: 99.7487% ( 4)
00:08:35.382 16535.237 - 16636.062: 99.7760% ( 5)
00:08:35.382 16636.062 - 16736.886: 99.7979% ( 4)
00:08:35.382 16736.886 - 16837.711: 99.8197% ( 4)
00:08:35.383 16837.711 - 16938.535: 99.8416% ( 4)
00:08:35.383 16938.535 - 17039.360: 99.8634% ( 4)
00:08:35.383 17039.360 - 17140.185: 99.8907% ( 5)
00:08:35.383 17140.185 - 17241.009: 99.9126% ( 4)
00:08:35.383 17241.009 - 17341.834: 99.9344% ( 4)
00:08:35.383 17341.834 - 17442.658: 99.9563% ( 4)
00:08:35.383 17442.658 - 17543.483: 99.9836% ( 5)
00:08:35.383 17543.483 - 17644.308: 100.0000% ( 3)
00:08:35.383
00:08:35.383 14:09:36 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:36.320 Initializing NVMe Controllers
00:08:36.320 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:08:36.320 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:08:36.320 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:08:36.320 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:08:36.320 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:08:36.320 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:08:36.320 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:08:36.320 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:08:36.320 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:08:36.320 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:08:36.320 Initialization complete. Launching workers.
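The spdk_nvme_perf invocation recorded above drives everything that follows: a pure-write workload at queue depth 128 with 12 KiB I/Os for one second, with latency tracking enabled. An annotated restatement is given below; the flag meanings are taken from spdk_nvme_perf's usage text and are assumptions for this exact SPDK revision, not verified against it.

    # Sketch: same command as logged above, one flag per line.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 \
        -w write \
        -o 12288 \
        -t 1 \
        -LL \
        -i 0
    # -q 128    queue depth: up to 128 outstanding I/Os per namespace
    # -w write  I/O pattern: 100% sequential writes
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       -L enables software latency tracking; giving it twice is
    #           assumed to also print the per-bucket latency histograms
    #           seen in this log
    # -i 0      shared-memory group ID, letting the process coexist with
    #           other SPDK applications on the same hugepages

In the output that follows, each "Summary latency data" block lists latency percentiles in microseconds, and each "Latency histogram" bucket line gives a latency range in microseconds, the cumulative percentage of I/Os completed at or below that range, and, in parentheses, the I/O count falling in that bucket (e.g. "5142.055 - 5167.262: 0.0110% ( 2)" means 2 I/Os landed in that range).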
00:08:36.320 ========================================================
00:08:36.321 Latency(us)
00:08:36.321 Device Information : IOPS MiB/s Average min max
00:08:36.321 PCIE (0000:00:09.0) NSID 1 from core 0: 17110.42 200.51 7477.96 5008.74 25199.24
00:08:36.321 PCIE (0000:00:06.0) NSID 1 from core 0: 17110.42 200.51 7471.93 4967.24 25076.36
00:08:36.321 PCIE (0000:00:07.0) NSID 1 from core 0: 17110.42 200.51 7465.19 5238.57 23485.78
00:08:36.321 PCIE (0000:00:08.0) NSID 1 from core 0: 17110.42 200.51 7459.21 5240.77 22072.31
00:08:36.321 PCIE (0000:00:08.0) NSID 2 from core 0: 17110.42 200.51 7453.10 5237.05 20833.39
00:08:36.321 PCIE (0000:00:08.0) NSID 3 from core 0: 17238.11 202.01 7391.67 5296.63 13832.83
00:08:36.321 ========================================================
00:08:36.321 Total : 102790.22 1204.57 7453.10 4967.24 25199.24
00:08:36.321
00:08:36.321 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0:
00:08:36.321 =================================================================================
00:08:36.321 1.00000% : 5620.972us
00:08:36.321 10.00000% : 6099.889us
00:08:36.321 25.00000% : 6377.157us
00:08:36.321 50.00000% : 7511.434us
00:08:36.321 75.00000% : 8116.382us
00:08:36.321 90.00000% : 8620.505us
00:08:36.321 95.00000% : 8973.391us
00:08:36.321 98.00000% : 10889.058us
00:08:36.321 99.00000% : 12149.366us
00:08:36.321 99.50000% : 24399.557us
00:08:36.321 99.90000% : 25105.329us
00:08:36.321 99.99000% : 25206.154us
00:08:36.321 99.99900% : 25206.154us
00:08:36.321 99.99990% : 25206.154us
00:08:36.321 99.99999% : 25206.154us
00:08:36.321
00:08:36.321 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0:
00:08:36.321 =================================================================================
00:08:36.321 1.00000% : 5469.735us
00:08:36.321 10.00000% : 5948.652us
00:08:36.321 25.00000% : 6503.188us
00:08:36.321 50.00000% : 7259.372us
00:08:36.321 75.00000% : 8217.206us
00:08:36.321 90.00000% : 8822.154us
00:08:36.321 95.00000% : 9225.452us
00:08:36.321 98.00000% : 10586.585us
00:08:36.321 99.00000% : 12351.015us
00:08:36.321 99.50000% : 22786.363us
00:08:36.321 99.90000% : 24702.031us
00:08:36.321 99.99000% : 25105.329us
00:08:36.321 99.99900% : 25105.329us
00:08:36.321 99.99990% : 25105.329us
00:08:36.321 99.99999% : 25105.329us
00:08:36.321
00:08:36.321 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0:
00:08:36.321 =================================================================================
00:08:36.321 1.00000% : 5747.003us
00:08:36.321 10.00000% : 6125.095us
00:08:36.321 25.00000% : 6377.157us
00:08:36.321 50.00000% : 7511.434us
00:08:36.321 75.00000% : 8116.382us
00:08:36.321 90.00000% : 8620.505us
00:08:36.321 95.00000% : 9023.803us
00:08:36.321 98.00000% : 10536.172us
00:08:36.321 99.00000% : 11544.418us
00:08:36.321 99.50000% : 21475.643us
00:08:36.321 99.90000% : 23088.837us
00:08:36.321 99.99000% : 23492.135us
00:08:36.321 99.99900% : 23492.135us
00:08:36.321 99.99990% : 23492.135us
00:08:36.321 99.99999% : 23492.135us
00:08:36.321
00:08:36.321 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0:
00:08:36.321 =================================================================================
00:08:36.321 1.00000% : 5721.797us
00:08:36.321 10.00000% : 6125.095us
00:08:36.321 25.00000% : 6427.569us
00:08:36.321 50.00000% : 7461.022us
00:08:36.321 75.00000% : 8065.969us
00:08:36.321 90.00000% : 8620.505us
00:08:36.321 95.00000% : 9023.803us
00:08:36.321 98.00000% : 10334.523us
00:08:36.321 99.00000% : 
12502.252us 00:08:36.321 99.50000% : 20064.098us 00:08:36.321 99.90000% : 21677.292us 00:08:36.321 99.99000% : 22080.591us 00:08:36.321 99.99900% : 22080.591us 00:08:36.321 99.99990% : 22080.591us 00:08:36.321 99.99999% : 22080.591us 00:08:36.321 00:08:36.321 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:08:36.321 ================================================================================= 00:08:36.321 1.00000% : 5747.003us 00:08:36.321 10.00000% : 6125.095us 00:08:36.321 25.00000% : 6427.569us 00:08:36.321 50.00000% : 7461.022us 00:08:36.321 75.00000% : 8065.969us 00:08:36.321 90.00000% : 8620.505us 00:08:36.321 95.00000% : 8973.391us 00:08:36.321 98.00000% : 10334.523us 00:08:36.321 99.00000% : 12905.551us 00:08:36.321 99.50000% : 18854.203us 00:08:36.321 99.90000% : 20467.397us 00:08:36.321 99.99000% : 20870.695us 00:08:36.321 99.99900% : 20870.695us 00:08:36.321 99.99990% : 20870.695us 00:08:36.321 99.99999% : 20870.695us 00:08:36.321 00:08:36.321 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:08:36.321 ================================================================================= 00:08:36.321 1.00000% : 5671.385us 00:08:36.321 10.00000% : 6125.095us 00:08:36.321 25.00000% : 6377.157us 00:08:36.321 50.00000% : 7461.022us 00:08:36.321 75.00000% : 8116.382us 00:08:36.321 90.00000% : 8620.505us 00:08:36.321 95.00000% : 8973.391us 00:08:36.321 98.00000% : 10485.760us 00:08:36.321 99.00000% : 11796.480us 00:08:36.321 99.50000% : 12603.077us 00:08:36.321 99.90000% : 13510.498us 00:08:36.321 99.99000% : 13812.972us 00:08:36.321 99.99900% : 13913.797us 00:08:36.321 99.99990% : 13913.797us 00:08:36.321 99.99999% : 13913.797us 00:08:36.321 00:08:36.321 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:08:36.321 ============================================================================== 00:08:36.321 Range in us Cumulative IO count 00:08:36.321 4990.818 - 5016.025: 0.0058% ( 1) 00:08:36.321 5242.880 - 5268.086: 0.0175% ( 2) 00:08:36.321 5268.086 - 5293.292: 0.0466% ( 5) 00:08:36.321 5293.292 - 5318.498: 0.0933% ( 8) 00:08:36.321 5318.498 - 5343.705: 0.1108% ( 3) 00:08:36.321 5343.705 - 5368.911: 0.1399% ( 5) 00:08:36.321 5368.911 - 5394.117: 0.1866% ( 8) 00:08:36.321 5394.117 - 5419.323: 0.2274% ( 7) 00:08:36.321 5419.323 - 5444.529: 0.2915% ( 11) 00:08:36.321 5444.529 - 5469.735: 0.3498% ( 10) 00:08:36.321 5469.735 - 5494.942: 0.3965% ( 8) 00:08:36.321 5494.942 - 5520.148: 0.4489% ( 9) 00:08:36.321 5520.148 - 5545.354: 0.5189% ( 12) 00:08:36.321 5545.354 - 5570.560: 0.7113% ( 33) 00:08:36.321 5570.560 - 5595.766: 0.9153% ( 35) 00:08:36.321 5595.766 - 5620.972: 1.0319% ( 20) 00:08:36.321 5620.972 - 5646.178: 1.1952% ( 28) 00:08:36.321 5646.178 - 5671.385: 1.6558% ( 79) 00:08:36.321 5671.385 - 5696.591: 1.8657% ( 36) 00:08:36.321 5696.591 - 5721.797: 2.0697% ( 35) 00:08:36.321 5721.797 - 5747.003: 2.3263% ( 44) 00:08:36.321 5747.003 - 5772.209: 2.6353% ( 53) 00:08:36.321 5772.209 - 5797.415: 3.0084% ( 64) 00:08:36.321 5797.415 - 5822.622: 3.5156% ( 87) 00:08:36.321 5822.622 - 5847.828: 4.0229% ( 87) 00:08:36.321 5847.828 - 5873.034: 4.5359% ( 88) 00:08:36.321 5873.034 - 5898.240: 5.0956% ( 96) 00:08:36.321 5898.240 - 5923.446: 5.7603% ( 114) 00:08:36.321 5923.446 - 5948.652: 6.3783% ( 106) 00:08:36.321 5948.652 - 5973.858: 6.9321% ( 95) 00:08:36.321 5973.858 - 5999.065: 7.5910% ( 113) 00:08:36.321 5999.065 - 6024.271: 8.3256% ( 126) 00:08:36.321 6024.271 - 6049.477: 9.1593% ( 143) 00:08:36.321 6049.477 - 6074.683: 9.9289% ( 132) 
00:08:36.321 6074.683 - 6099.889: 10.9667% ( 178) 00:08:36.321 6099.889 - 6125.095: 12.3018% ( 229) 00:08:36.321 6125.095 - 6150.302: 13.5844% ( 220) 00:08:36.321 6150.302 - 6175.508: 14.9604% ( 236) 00:08:36.321 6175.508 - 6200.714: 16.8435% ( 323) 00:08:36.321 6200.714 - 6225.920: 18.1670% ( 227) 00:08:36.321 6225.920 - 6251.126: 19.6129% ( 248) 00:08:36.321 6251.126 - 6276.332: 20.8547% ( 213) 00:08:36.321 6276.332 - 6301.538: 22.3356% ( 254) 00:08:36.321 6301.538 - 6326.745: 23.2218% ( 152) 00:08:36.321 6326.745 - 6351.951: 24.2712% ( 180) 00:08:36.321 6351.951 - 6377.157: 25.3265% ( 181) 00:08:36.321 6377.157 - 6402.363: 26.3818% ( 181) 00:08:36.321 6402.363 - 6427.569: 27.6877% ( 224) 00:08:36.321 6427.569 - 6452.775: 28.5098% ( 141) 00:08:36.321 6452.775 - 6503.188: 30.0140% ( 258) 00:08:36.321 6503.188 - 6553.600: 31.0634% ( 180) 00:08:36.321 6553.600 - 6604.012: 32.1653% ( 189) 00:08:36.321 6604.012 - 6654.425: 33.5879% ( 244) 00:08:36.321 6654.425 - 6704.837: 35.1038% ( 260) 00:08:36.321 6704.837 - 6755.249: 36.3864% ( 220) 00:08:36.321 6755.249 - 6805.662: 37.4009% ( 174) 00:08:36.321 6805.662 - 6856.074: 38.2404% ( 144) 00:08:36.321 6856.074 - 6906.486: 38.9750% ( 126) 00:08:36.321 6906.486 - 6956.898: 39.6047% ( 108) 00:08:36.321 6956.898 - 7007.311: 40.1528% ( 94) 00:08:36.321 7007.311 - 7057.723: 40.7125% ( 96) 00:08:36.321 7057.723 - 7108.135: 41.3888% ( 116) 00:08:36.321 7108.135 - 7158.548: 42.2225% ( 143) 00:08:36.321 7158.548 - 7208.960: 43.1145% ( 153) 00:08:36.321 7208.960 - 7259.372: 44.0707% ( 164) 00:08:36.321 7259.372 - 7309.785: 45.2600% ( 204) 00:08:36.321 7309.785 - 7360.197: 46.7118% ( 249) 00:08:36.321 7360.197 - 7410.609: 48.2509% ( 264) 00:08:36.321 7410.609 - 7461.022: 49.9184% ( 286) 00:08:36.321 7461.022 - 7511.434: 51.7083% ( 307) 00:08:36.321 7511.434 - 7561.846: 53.6964% ( 341) 00:08:36.321 7561.846 - 7612.258: 55.5679% ( 321) 00:08:36.321 7612.258 - 7662.671: 57.7134% ( 368) 00:08:36.321 7662.671 - 7713.083: 59.8881% ( 373) 00:08:36.321 7713.083 - 7763.495: 61.9520% ( 354) 00:08:36.321 7763.495 - 7813.908: 64.1091% ( 370) 00:08:36.321 7813.908 - 7864.320: 66.2547% ( 368) 00:08:36.321 7864.320 - 7914.732: 68.3944% ( 367) 00:08:36.322 7914.732 - 7965.145: 70.2484% ( 318) 00:08:36.322 7965.145 - 8015.557: 72.2540% ( 344) 00:08:36.322 8015.557 - 8065.969: 74.0555% ( 309) 00:08:36.322 8065.969 - 8116.382: 76.0669% ( 345) 00:08:36.322 8116.382 - 8166.794: 77.7868% ( 295) 00:08:36.322 8166.794 - 8217.206: 79.5184% ( 297) 00:08:36.322 8217.206 - 8267.618: 81.1042% ( 272) 00:08:36.322 8267.618 - 8318.031: 82.7134% ( 276) 00:08:36.322 8318.031 - 8368.443: 84.1826% ( 252) 00:08:36.322 8368.443 - 8418.855: 85.6343% ( 249) 00:08:36.322 8418.855 - 8469.268: 86.9578% ( 227) 00:08:36.322 8469.268 - 8519.680: 88.2521% ( 222) 00:08:36.322 8519.680 - 8570.092: 89.5114% ( 216) 00:08:36.322 8570.092 - 8620.505: 90.5259% ( 174) 00:08:36.322 8620.505 - 8670.917: 91.4646% ( 161) 00:08:36.322 8670.917 - 8721.329: 92.2283% ( 131) 00:08:36.322 8721.329 - 8771.742: 93.1028% ( 150) 00:08:36.322 8771.742 - 8822.154: 93.8433% ( 127) 00:08:36.322 8822.154 - 8872.566: 94.4263% ( 100) 00:08:36.322 8872.566 - 8922.978: 94.9510% ( 90) 00:08:36.322 8922.978 - 8973.391: 95.3008% ( 60) 00:08:36.322 8973.391 - 9023.803: 95.6507% ( 60) 00:08:36.322 9023.803 - 9074.215: 95.9363% ( 49) 00:08:36.322 9074.215 - 9124.628: 96.1870% ( 43) 00:08:36.322 9124.628 - 9175.040: 96.3969% ( 36) 00:08:36.322 9175.040 - 9225.452: 96.6010% ( 35) 00:08:36.322 9225.452 - 9275.865: 96.7642% ( 28) 00:08:36.322 
9275.865 - 9326.277: 96.9275% ( 28) 00:08:36.322 9326.277 - 9376.689: 97.0849% ( 27) 00:08:36.322 9376.689 - 9427.102: 97.2365% ( 26) 00:08:36.322 9427.102 - 9477.514: 97.3822% ( 25) 00:08:36.322 9477.514 - 9527.926: 97.4930% ( 19) 00:08:36.322 9527.926 - 9578.338: 97.5921% ( 17) 00:08:36.322 9578.338 - 9628.751: 97.6679% ( 13) 00:08:36.322 9628.751 - 9679.163: 97.7087% ( 7) 00:08:36.322 9679.163 - 9729.575: 97.7204% ( 2) 00:08:36.322 9729.575 - 9779.988: 97.7379% ( 3) 00:08:36.322 9779.988 - 9830.400: 97.7554% ( 3) 00:08:36.322 9830.400 - 9880.812: 97.7612% ( 1) 00:08:36.322 10485.760 - 10536.172: 97.7845% ( 4) 00:08:36.322 10536.172 - 10586.585: 97.8020% ( 3) 00:08:36.322 10586.585 - 10636.997: 97.8253% ( 4) 00:08:36.322 10636.997 - 10687.409: 97.8545% ( 5) 00:08:36.322 10687.409 - 10737.822: 97.9011% ( 8) 00:08:36.322 10737.822 - 10788.234: 97.9361% ( 6) 00:08:36.322 10788.234 - 10838.646: 97.9769% ( 7) 00:08:36.322 10838.646 - 10889.058: 98.0119% ( 6) 00:08:36.322 10889.058 - 10939.471: 98.0527% ( 7) 00:08:36.322 10939.471 - 10989.883: 98.0935% ( 7) 00:08:36.322 10989.883 - 11040.295: 98.1343% ( 7) 00:08:36.322 11040.295 - 11090.708: 98.1751% ( 7) 00:08:36.322 11090.708 - 11141.120: 98.2160% ( 7) 00:08:36.322 11141.120 - 11191.532: 98.2509% ( 6) 00:08:36.322 11191.532 - 11241.945: 98.2917% ( 7) 00:08:36.322 11241.945 - 11292.357: 98.3267% ( 6) 00:08:36.322 11292.357 - 11342.769: 98.3734% ( 8) 00:08:36.322 11342.769 - 11393.182: 98.4142% ( 7) 00:08:36.322 11393.182 - 11443.594: 98.4492% ( 6) 00:08:36.322 11443.594 - 11494.006: 98.4958% ( 8) 00:08:36.322 11494.006 - 11544.418: 98.5366% ( 7) 00:08:36.322 11544.418 - 11594.831: 98.5833% ( 8) 00:08:36.322 11594.831 - 11645.243: 98.6182% ( 6) 00:08:36.322 11645.243 - 11695.655: 98.6649% ( 8) 00:08:36.322 11695.655 - 11746.068: 98.7057% ( 7) 00:08:36.322 11746.068 - 11796.480: 98.7465% ( 7) 00:08:36.322 11796.480 - 11846.892: 98.7815% ( 6) 00:08:36.322 11846.892 - 11897.305: 98.8281% ( 8) 00:08:36.322 11897.305 - 11947.717: 98.8631% ( 6) 00:08:36.322 11947.717 - 11998.129: 98.9039% ( 7) 00:08:36.322 11998.129 - 12048.542: 98.9447% ( 7) 00:08:36.322 12048.542 - 12098.954: 98.9855% ( 7) 00:08:36.322 12098.954 - 12149.366: 99.0264% ( 7) 00:08:36.322 12149.366 - 12199.778: 99.0613% ( 6) 00:08:36.322 12199.778 - 12250.191: 99.1080% ( 8) 00:08:36.322 12250.191 - 12300.603: 99.1488% ( 7) 00:08:36.322 12300.603 - 12351.015: 99.1896% ( 7) 00:08:36.322 12351.015 - 12401.428: 99.2362% ( 8) 00:08:36.322 12401.428 - 12451.840: 99.2537% ( 3) 00:08:36.322 23895.434 - 23996.258: 99.2771% ( 4) 00:08:36.322 23996.258 - 24097.083: 99.3354% ( 10) 00:08:36.322 24097.083 - 24197.908: 99.4053% ( 12) 00:08:36.322 24197.908 - 24298.732: 99.4928% ( 15) 00:08:36.322 24298.732 - 24399.557: 99.5219% ( 5) 00:08:36.322 24399.557 - 24500.382: 99.5452% ( 4) 00:08:36.322 24500.382 - 24601.206: 99.5802% ( 6) 00:08:36.322 24601.206 - 24702.031: 99.6618% ( 14) 00:08:36.322 24702.031 - 24802.855: 99.7493% ( 15) 00:08:36.322 24802.855 - 24903.680: 99.8193% ( 12) 00:08:36.322 24903.680 - 25004.505: 99.8892% ( 12) 00:08:36.322 25004.505 - 25105.329: 99.9534% ( 11) 00:08:36.322 25105.329 - 25206.154: 100.0000% ( 8) 00:08:36.322 00:08:36.322 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:08:36.322 ============================================================================== 00:08:36.322 Range in us Cumulative IO count 00:08:36.322 4965.612 - 4990.818: 0.0058% ( 1) 00:08:36.322 4990.818 - 5016.025: 0.0233% ( 3) 00:08:36.322 5016.025 - 5041.231: 0.0292% ( 1) 
00:08:36.322 5041.231 - 5066.437: 0.0350% ( 1) 00:08:36.322 5066.437 - 5091.643: 0.0525% ( 3) 00:08:36.322 5091.643 - 5116.849: 0.0758% ( 4) 00:08:36.322 5116.849 - 5142.055: 0.1224% ( 8) 00:08:36.322 5142.055 - 5167.262: 0.1632% ( 7) 00:08:36.322 5167.262 - 5192.468: 0.2041% ( 7) 00:08:36.322 5192.468 - 5217.674: 0.2449% ( 7) 00:08:36.322 5217.674 - 5242.880: 0.2857% ( 7) 00:08:36.322 5242.880 - 5268.086: 0.3265% ( 7) 00:08:36.322 5268.086 - 5293.292: 0.3848% ( 10) 00:08:36.322 5293.292 - 5318.498: 0.4548% ( 12) 00:08:36.322 5318.498 - 5343.705: 0.5306% ( 13) 00:08:36.322 5343.705 - 5368.911: 0.6297% ( 17) 00:08:36.322 5368.911 - 5394.117: 0.7346% ( 18) 00:08:36.322 5394.117 - 5419.323: 0.8570% ( 21) 00:08:36.322 5419.323 - 5444.529: 0.9853% ( 22) 00:08:36.322 5444.529 - 5469.735: 1.1952% ( 36) 00:08:36.322 5469.735 - 5494.942: 1.3468% ( 26) 00:08:36.322 5494.942 - 5520.148: 1.5100% ( 28) 00:08:36.322 5520.148 - 5545.354: 1.7549% ( 42) 00:08:36.322 5545.354 - 5570.560: 2.0406% ( 49) 00:08:36.322 5570.560 - 5595.766: 2.4953% ( 78) 00:08:36.322 5595.766 - 5620.972: 2.9093% ( 71) 00:08:36.322 5620.972 - 5646.178: 3.3291% ( 72) 00:08:36.322 5646.178 - 5671.385: 3.8363% ( 87) 00:08:36.322 5671.385 - 5696.591: 4.4426% ( 104) 00:08:36.322 5696.591 - 5721.797: 5.0431% ( 103) 00:08:36.322 5721.797 - 5747.003: 5.6437% ( 103) 00:08:36.322 5747.003 - 5772.209: 6.2208% ( 99) 00:08:36.322 5772.209 - 5797.415: 6.9496% ( 125) 00:08:36.322 5797.415 - 5822.622: 7.4860% ( 92) 00:08:36.322 5822.622 - 5847.828: 8.0807% ( 102) 00:08:36.322 5847.828 - 5873.034: 8.6171% ( 92) 00:08:36.322 5873.034 - 5898.240: 9.1826% ( 97) 00:08:36.322 5898.240 - 5923.446: 9.7656% ( 100) 00:08:36.322 5923.446 - 5948.652: 10.4186% ( 112) 00:08:36.322 5948.652 - 5973.858: 11.1474% ( 125) 00:08:36.322 5973.858 - 5999.065: 12.0394% ( 153) 00:08:36.322 5999.065 - 6024.271: 12.9256% ( 152) 00:08:36.322 6024.271 - 6049.477: 13.8351% ( 156) 00:08:36.322 6049.477 - 6074.683: 14.6280% ( 136) 00:08:36.322 6074.683 - 6099.889: 15.4268% ( 137) 00:08:36.322 6099.889 - 6125.095: 16.3013% ( 150) 00:08:36.322 6125.095 - 6150.302: 17.1525% ( 146) 00:08:36.322 6150.302 - 6175.508: 17.8696% ( 123) 00:08:36.322 6175.508 - 6200.714: 18.5226% ( 112) 00:08:36.322 6200.714 - 6225.920: 19.2980% ( 133) 00:08:36.322 6225.920 - 6251.126: 19.8694% ( 98) 00:08:36.322 6251.126 - 6276.332: 20.4641% ( 102) 00:08:36.322 6276.332 - 6301.538: 21.1346% ( 115) 00:08:36.322 6301.538 - 6326.745: 21.8225% ( 118) 00:08:36.322 6326.745 - 6351.951: 22.3939% ( 98) 00:08:36.322 6351.951 - 6377.157: 22.9244% ( 91) 00:08:36.322 6377.157 - 6402.363: 23.6532% ( 125) 00:08:36.322 6402.363 - 6427.569: 24.2596% ( 104) 00:08:36.322 6427.569 - 6452.775: 24.9359% ( 116) 00:08:36.322 6452.775 - 6503.188: 26.4401% ( 258) 00:08:36.322 6503.188 - 6553.600: 27.5245% ( 186) 00:08:36.322 6553.600 - 6604.012: 28.6964% ( 201) 00:08:36.322 6604.012 - 6654.425: 29.9907% ( 222) 00:08:36.322 6654.425 - 6704.837: 31.2442% ( 215) 00:08:36.322 6704.837 - 6755.249: 32.7134% ( 252) 00:08:36.322 6755.249 - 6805.662: 34.2118% ( 257) 00:08:36.322 6805.662 - 6856.074: 35.6518% ( 247) 00:08:36.322 6856.074 - 6906.486: 37.1677% ( 260) 00:08:36.322 6906.486 - 6956.898: 38.7535% ( 272) 00:08:36.322 6956.898 - 7007.311: 40.7008% ( 334) 00:08:36.322 7007.311 - 7057.723: 42.7122% ( 345) 00:08:36.322 7057.723 - 7108.135: 44.7878% ( 356) 00:08:36.322 7108.135 - 7158.548: 46.8925% ( 361) 00:08:36.322 7158.548 - 7208.960: 48.7174% ( 313) 00:08:36.322 7208.960 - 7259.372: 50.6588% ( 333) 00:08:36.322 7259.372 - 
7309.785: 52.2330% ( 270) 00:08:36.322 7309.785 - 7360.197: 53.8654% ( 280) 00:08:36.322 7360.197 - 7410.609: 55.3813% ( 260) 00:08:36.322 7410.609 - 7461.022: 56.7164% ( 229) 00:08:36.322 7461.022 - 7511.434: 57.9000% ( 203) 00:08:36.322 7511.434 - 7561.846: 59.0543% ( 198) 00:08:36.322 7561.846 - 7612.258: 60.1971% ( 196) 00:08:36.322 7612.258 - 7662.671: 61.5672% ( 235) 00:08:36.322 7662.671 - 7713.083: 62.6749% ( 190) 00:08:36.322 7713.083 - 7763.495: 63.8410% ( 200) 00:08:36.322 7763.495 - 7813.908: 65.1178% ( 219) 00:08:36.322 7813.908 - 7864.320: 66.4179% ( 223) 00:08:36.322 7864.320 - 7914.732: 67.6597% ( 213) 00:08:36.322 7914.732 - 7965.145: 69.0648% ( 241) 00:08:36.322 7965.145 - 8015.557: 70.3475% ( 220) 00:08:36.322 8015.557 - 8065.969: 71.6943% ( 231) 00:08:36.322 8065.969 - 8116.382: 72.9653% ( 218) 00:08:36.323 8116.382 - 8166.794: 74.3587% ( 239) 00:08:36.323 8166.794 - 8217.206: 75.6880% ( 228) 00:08:36.323 8217.206 - 8267.618: 77.1164% ( 245) 00:08:36.323 8267.618 - 8318.031: 78.3465% ( 211) 00:08:36.323 8318.031 - 8368.443: 79.6467% ( 223) 00:08:36.323 8368.443 - 8418.855: 80.9177% ( 218) 00:08:36.323 8418.855 - 8469.268: 82.2528% ( 229) 00:08:36.323 8469.268 - 8519.680: 83.5646% ( 225) 00:08:36.323 8519.680 - 8570.092: 84.8589% ( 222) 00:08:36.323 8570.092 - 8620.505: 86.0250% ( 200) 00:08:36.323 8620.505 - 8670.917: 87.2260% ( 206) 00:08:36.323 8670.917 - 8721.329: 88.3512% ( 193) 00:08:36.323 8721.329 - 8771.742: 89.4706% ( 192) 00:08:36.323 8771.742 - 8822.154: 90.4093% ( 161) 00:08:36.323 8822.154 - 8872.566: 91.3596% ( 163) 00:08:36.323 8872.566 - 8922.978: 92.2808% ( 158) 00:08:36.323 8922.978 - 8973.391: 93.0212% ( 127) 00:08:36.323 8973.391 - 9023.803: 93.6742% ( 112) 00:08:36.323 9023.803 - 9074.215: 94.1698% ( 85) 00:08:36.323 9074.215 - 9124.628: 94.6245% ( 78) 00:08:36.323 9124.628 - 9175.040: 94.9918% ( 63) 00:08:36.323 9175.040 - 9225.452: 95.2542% ( 45) 00:08:36.323 9225.452 - 9275.865: 95.5515% ( 51) 00:08:36.323 9275.865 - 9326.277: 95.8314% ( 48) 00:08:36.323 9326.277 - 9376.689: 96.0821% ( 43) 00:08:36.323 9376.689 - 9427.102: 96.2570% ( 30) 00:08:36.323 9427.102 - 9477.514: 96.4086% ( 26) 00:08:36.323 9477.514 - 9527.926: 96.5602% ( 26) 00:08:36.323 9527.926 - 9578.338: 96.6826% ( 21) 00:08:36.323 9578.338 - 9628.751: 96.8109% ( 22) 00:08:36.323 9628.751 - 9679.163: 96.9508% ( 24) 00:08:36.323 9679.163 - 9729.575: 97.0674% ( 20) 00:08:36.323 9729.575 - 9779.988: 97.1723% ( 18) 00:08:36.323 9779.988 - 9830.400: 97.2656% ( 16) 00:08:36.323 9830.400 - 9880.812: 97.3764% ( 19) 00:08:36.323 9880.812 - 9931.225: 97.4522% ( 13) 00:08:36.323 9931.225 - 9981.637: 97.5222% ( 12) 00:08:36.323 9981.637 - 10032.049: 97.5979% ( 13) 00:08:36.323 10032.049 - 10082.462: 97.6504% ( 9) 00:08:36.323 10082.462 - 10132.874: 97.7087% ( 10) 00:08:36.323 10132.874 - 10183.286: 97.7495% ( 7) 00:08:36.323 10183.286 - 10233.698: 97.7787% ( 5) 00:08:36.323 10233.698 - 10284.111: 97.8195% ( 7) 00:08:36.323 10284.111 - 10334.523: 97.8428% ( 4) 00:08:36.323 10334.523 - 10384.935: 97.8778% ( 6) 00:08:36.323 10384.935 - 10435.348: 97.9186% ( 7) 00:08:36.323 10435.348 - 10485.760: 97.9478% ( 5) 00:08:36.323 10485.760 - 10536.172: 97.9886% ( 7) 00:08:36.323 10536.172 - 10586.585: 98.0002% ( 2) 00:08:36.323 10586.585 - 10636.997: 98.0410% ( 7) 00:08:36.323 10636.997 - 10687.409: 98.0760% ( 6) 00:08:36.323 10687.409 - 10737.822: 98.1110% ( 6) 00:08:36.323 10737.822 - 10788.234: 98.1518% ( 7) 00:08:36.323 10788.234 - 10838.646: 98.1810% ( 5) 00:08:36.323 10838.646 - 10889.058: 98.2218% ( 
7) 00:08:36.323 10889.058 - 10939.471: 98.2509% ( 5) 00:08:36.323 10939.471 - 10989.883: 98.2859% ( 6) 00:08:36.323 10989.883 - 11040.295: 98.3734% ( 15) 00:08:36.323 11040.295 - 11090.708: 98.3909% ( 3) 00:08:36.323 11090.708 - 11141.120: 98.4258% ( 6) 00:08:36.323 11141.120 - 11191.532: 98.4667% ( 7) 00:08:36.323 11191.532 - 11241.945: 98.4900% ( 4) 00:08:36.323 11241.945 - 11292.357: 98.5366% ( 8) 00:08:36.323 11292.357 - 11342.769: 98.5716% ( 6) 00:08:36.323 11342.769 - 11393.182: 98.6007% ( 5) 00:08:36.323 11393.182 - 11443.594: 98.6357% ( 6) 00:08:36.323 11443.594 - 11494.006: 98.6649% ( 5) 00:08:36.323 11494.006 - 11544.418: 98.7057% ( 7) 00:08:36.323 11544.418 - 11594.831: 98.7407% ( 6) 00:08:36.323 11594.831 - 11645.243: 98.7698% ( 5) 00:08:36.323 11645.243 - 11695.655: 98.7815% ( 2) 00:08:36.323 11695.655 - 11746.068: 98.7990% ( 3) 00:08:36.323 11746.068 - 11796.480: 98.8165% ( 3) 00:08:36.323 11846.892 - 11897.305: 98.8631% ( 8) 00:08:36.323 11897.305 - 11947.717: 98.8689% ( 1) 00:08:36.323 11947.717 - 11998.129: 98.8864% ( 3) 00:08:36.323 11998.129 - 12048.542: 98.9097% ( 4) 00:08:36.323 12048.542 - 12098.954: 98.9272% ( 3) 00:08:36.323 12098.954 - 12149.366: 98.9331% ( 1) 00:08:36.323 12149.366 - 12199.778: 98.9622% ( 5) 00:08:36.323 12199.778 - 12250.191: 98.9739% ( 2) 00:08:36.323 12250.191 - 12300.603: 98.9972% ( 4) 00:08:36.323 12300.603 - 12351.015: 99.0089% ( 2) 00:08:36.323 12351.015 - 12401.428: 99.0322% ( 4) 00:08:36.323 12401.428 - 12451.840: 99.0438% ( 2) 00:08:36.323 12451.840 - 12502.252: 99.0672% ( 4) 00:08:36.323 12502.252 - 12552.665: 99.0788% ( 2) 00:08:36.323 12552.665 - 12603.077: 99.1021% ( 4) 00:08:36.323 12603.077 - 12653.489: 99.1138% ( 2) 00:08:36.323 12653.489 - 12703.902: 99.1313% ( 3) 00:08:36.323 12703.902 - 12754.314: 99.1488% ( 3) 00:08:36.323 12754.314 - 12804.726: 99.1604% ( 2) 00:08:36.323 12804.726 - 12855.138: 99.1779% ( 3) 00:08:36.323 12855.138 - 12905.551: 99.1896% ( 2) 00:08:36.323 12905.551 - 13006.375: 99.2304% ( 7) 00:08:36.323 13006.375 - 13107.200: 99.2537% ( 4) 00:08:36.323 21979.766 - 22080.591: 99.2771% ( 4) 00:08:36.323 22080.591 - 22181.415: 99.3004% ( 4) 00:08:36.323 22181.415 - 22282.240: 99.3528% ( 9) 00:08:36.323 22282.240 - 22383.065: 99.3878% ( 6) 00:08:36.323 22383.065 - 22483.889: 99.4170% ( 5) 00:08:36.323 22483.889 - 22584.714: 99.4461% ( 5) 00:08:36.323 22584.714 - 22685.538: 99.4928% ( 8) 00:08:36.323 22685.538 - 22786.363: 99.5161% ( 4) 00:08:36.323 22786.363 - 22887.188: 99.5452% ( 5) 00:08:36.323 22887.188 - 22988.012: 99.5919% ( 8) 00:08:36.323 22988.012 - 23088.837: 99.6152% ( 4) 00:08:36.323 23088.837 - 23189.662: 99.6327% ( 3) 00:08:36.323 23189.662 - 23290.486: 99.6385% ( 1) 00:08:36.323 23290.486 - 23391.311: 99.6677% ( 5) 00:08:36.323 23391.311 - 23492.135: 99.6852% ( 3) 00:08:36.323 23492.135 - 23592.960: 99.7143% ( 5) 00:08:36.323 23592.960 - 23693.785: 99.7435% ( 5) 00:08:36.323 23895.434 - 23996.258: 99.7610% ( 3) 00:08:36.323 23996.258 - 24097.083: 99.7785% ( 3) 00:08:36.323 24097.083 - 24197.908: 99.8076% ( 5) 00:08:36.323 24197.908 - 24298.732: 99.8251% ( 3) 00:08:36.323 24298.732 - 24399.557: 99.8542% ( 5) 00:08:36.323 24399.557 - 24500.382: 99.8717% ( 3) 00:08:36.323 24500.382 - 24601.206: 99.8951% ( 4) 00:08:36.323 24601.206 - 24702.031: 99.9125% ( 3) 00:08:36.323 24702.031 - 24802.855: 99.9417% ( 5) 00:08:36.323 24802.855 - 24903.680: 99.9650% ( 4) 00:08:36.323 24903.680 - 25004.505: 99.9825% ( 3) 00:08:36.323 25004.505 - 25105.329: 100.0000% ( 3) 00:08:36.323 00:08:36.323 Latency histogram for 
PCIE (0000:00:07.0) NSID 1 from core 0:
==============================================================================
Range in us Cumulative IO count
[per-bucket latency data elided: buckets span 5217.674 us to 23492.135 us; cumulative count reaches 100.0000% at 23492.135 us]

Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
==============================================================================
Range in us Cumulative IO count
[per-bucket latency data elided: buckets span 5217.674 us to 22080.591 us; cumulative count reaches 100.0000% at 22080.591 us]

Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
==============================================================================
Range in us Cumulative IO count
[per-bucket latency data elided: buckets span 5217.674 us to 20870.695 us; cumulative count reaches 100.0000% at 20870.695 us]

Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
==============================================================================
Range in us Cumulative IO count
[per-bucket latency data elided: buckets span 5293.292 us to 13913.797 us; cumulative count reaches 100.0000% at 13913.797 us]

14:09:37 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'

real 0m2.586s
user 0m2.297s
sys 0m0.196s
14:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:37 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_perf
************************************
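The elided tables above map each latency bucket to a cumulative percentage of completed I/Os. A minimal sketch of the same aggregation in awk, assuming a hypothetical samples.txt with one latency in microseconds per line and fixed 25 us buckets (SPDK's real histogram uses variable-width bucket boundaries):

    # Cumulative-percentage latency histogram over fixed-width buckets.
    # samples.txt is a hypothetical input: one latency value (us) per line.
    awk -v width=25 '
        { b = int($1 / width); n[b]++; if (b > max) max = b; total++ }
        END {
            cum = 0
            for (i = 0; i <= max; i++) {
                if (!n[i]) continue            # skip empty buckets, as the log output does
                cum += n[i]
                printf "%9.3f - %9.3f: %8.4f%% (%d)\n",
                    i * width, (i + 1) * width, 100 * cum / total, n[i]
            }
        }' samples.txt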
14:09:37 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
14:09:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
14:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:37 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_hello_world
************************************
14:09:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
Initializing NVMe Controllers
Attached to 0000:00:09.0
Namespace ID: 1 size: 1GB
Attached to 0000:00:06.0
Namespace ID: 1 size: 6GB
Attached to 0000:00:07.0
Namespace ID: 1 size: 5GB
Attached to 0000:00:08.0
Namespace ID: 1 size: 4GB
Namespace ID: 2 size: 4GB
Namespace ID: 3 size: 4GB
Initialization complete.
INFO: using host memory buffer for IO
Hello world!
INFO: using host memory buffer for IO
Hello world!
INFO: using host memory buffer for IO
Hello world!
INFO: using host memory buffer for IO
Hello world!
INFO: using host memory buffer for IO
Hello world!
INFO: using host memory buffer for IO
Hello world!

real 0m0.271s
user 0m0.134s
sys 0m0.086s
14:09:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_hello_world
************************************
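hello_world enumerates every attached controller and prints the size of each namespace before writing and reading back a buffer. On a machine where the kernel nvme driver still owns the disks (in this CI run the devices were detached from it for userspace I/O), roughly the same inventory can be pulled from sysfs:

    # List NVMe namespaces and sizes from sysfs; /sys/block/*/size is in
    # 512-byte sectors. Only works while the kernel nvme driver owns the devices.
    for ns in /sys/block/nvme*; do
        [ -r "$ns/size" ] || continue
        sectors=$(<"$ns/size")
        echo "${ns##*/}: $((sectors * 512 / 1024 / 1024 / 1024)) GB"
    done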
14:09:38 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
14:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_sgl
************************************
14:09:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
0000:00:09.0: build_io_request_0 Invalid IO length parameter
0000:00:09.0: build_io_request_1 Invalid IO length parameter
0000:00:09.0: build_io_request_2 Invalid IO length parameter
0000:00:09.0: build_io_request_3 Invalid IO length parameter
0000:00:09.0: build_io_request_4 Invalid IO length parameter
0000:00:09.0: build_io_request_5 Invalid IO length parameter
0000:00:09.0: build_io_request_6 Invalid IO length parameter
0000:00:09.0: build_io_request_7 Invalid IO length parameter
0000:00:09.0: build_io_request_8 Invalid IO length parameter
0000:00:09.0: build_io_request_9 Invalid IO length parameter
0000:00:09.0: build_io_request_10 Invalid IO length parameter
0000:00:09.0: build_io_request_11 Invalid IO length parameter
0000:00:06.0: build_io_request_0 Invalid IO length parameter
0000:00:06.0: build_io_request_1 Invalid IO length parameter
0000:00:06.0: build_io_request_3 Invalid IO length parameter
0000:00:06.0: build_io_request_8 Invalid IO length parameter
0000:00:06.0: build_io_request_9 Invalid IO length parameter
0000:00:06.0: build_io_request_11 Invalid IO length parameter
0000:00:07.0: build_io_request_0 Invalid IO length parameter
0000:00:07.0: build_io_request_1 Invalid IO length parameter
0000:00:07.0: build_io_request_3 Invalid IO length parameter
0000:00:07.0: build_io_request_8 Invalid IO length parameter
0000:00:07.0: build_io_request_9 Invalid IO length parameter
0000:00:07.0: build_io_request_11 Invalid IO length parameter
0000:00:08.0: build_io_request_0 Invalid IO length parameter
0000:00:08.0: build_io_request_1 Invalid IO length parameter
0000:00:08.0: build_io_request_2 Invalid IO length parameter
0000:00:08.0: build_io_request_3 Invalid IO length parameter
0000:00:08.0: build_io_request_4 Invalid IO length parameter
0000:00:08.0: build_io_request_5 Invalid IO length parameter
0000:00:08.0: build_io_request_6 Invalid IO length parameter
0000:00:08.0: build_io_request_7 Invalid IO length parameter
0000:00:08.0: build_io_request_8 Invalid IO length parameter
0000:00:08.0: build_io_request_9 Invalid IO length parameter
0000:00:08.0: build_io_request_10 Invalid IO length parameter
0000:00:08.0: build_io_request_11 Invalid IO length parameter
NVMe Readv/Writev Request test
Attached to 0000:00:09.0
Attached to 0000:00:06.0
Attached to 0000:00:07.0
Attached to 0000:00:08.0
0000:00:06.0: build_io_request_2 test passed
0000:00:06.0: build_io_request_4 test passed
0000:00:06.0: build_io_request_5 test passed
0000:00:06.0: build_io_request_6 test passed
0000:00:06.0: build_io_request_7 test passed
0000:00:06.0: build_io_request_10 test passed
0000:00:07.0: build_io_request_2 test passed
0000:00:07.0: build_io_request_4 test passed
0000:00:07.0: build_io_request_5 test passed
0000:00:07.0: build_io_request_6 test passed
0000:00:07.0: build_io_request_7 test passed
0000:00:07.0: build_io_request_10 test passed
Cleaning up...

real 0m0.378s
user 0m0.241s
sys 0m0.089s
14:09:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_sgl
************************************
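Every test in this run is launched through the same wrapper, which prints the START/END banners and the real/user/sys block seen after each section. SPDK's actual run_test lives in autotest_common.sh; a stripped-down stand-in with the same observable behavior might look like:

    # Hypothetical run_test-style wrapper (not SPDK's real implementation):
    # banner, timed command, banner. bash's `time` keyword prints the
    # real/user/sys block to stderr.
    stars='************************************'
    run_test_sketch() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' "$stars" "$name" "$stars"
        time "$@"
        local rc=$?
        printf '%s\nEND TEST %s\n%s\n' "$stars" "$name" "$stars"
        return "$rc"
    }
    # usage: run_test_sketch nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl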
14:09:38 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
14:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_e2edp
************************************
14:09:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
NVMe Write/Read with End-to-End data protection test
Attached to 0000:00:09.0
Attached to 0000:00:06.0
Attached to 0000:00:07.0
Attached to 0000:00:08.0
Cleaning up...

real 0m0.189s
user 0m0.056s
sys 0m0.089s
14:09:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_e2edp
************************************
14:09:38 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
14:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:38 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_reserve
************************************
14:09:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
=====================================================
NVMe Controller at PCI bus 0, device 9, function 0
=====================================================
Reservations: Not Supported
=====================================================
NVMe Controller at PCI bus 0, device 6, function 0
=====================================================
Reservations: Not Supported
=====================================================
NVMe Controller at PCI bus 0, device 7, function 0
=====================================================
Reservations: Not Supported
=====================================================
NVMe Controller at PCI bus 0, device 8, function 0
=====================================================
Reservations: Not Supported
Reservation test passed

real 0m0.209s
user 0m0.060s
sys 0m0.094s
14:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:39 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_reserve
************************************
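All four emulated controllers answer "Reservations: Not Supported". On hardware that the kernel driver owns, the same capability can be probed with nvme-cli (assuming it is installed; the device path here is a placeholder):

    # Query a persistent-reservation report; on a namespace without
    # reservation support the command fails, matching the result above.
    nvme resv-report /dev/nvme0n1 || echo "reservations not supported"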
14:09:39 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
14:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
14:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:39 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_err_injection
************************************
14:09:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
NVMe Error Injection test
Attached to 0000:00:09.0
Attached to 0000:00:06.0
Attached to 0000:00:07.0
Attached to 0000:00:08.0
0000:00:09.0: get features failed as expected
0000:00:06.0: get features failed as expected
0000:00:07.0: get features failed as expected
0000:00:08.0: get features failed as expected
0000:00:09.0: get features successfully as expected
0000:00:06.0: get features successfully as expected
0000:00:07.0: get features successfully as expected
0000:00:08.0: get features successfully as expected
0000:00:09.0: read failed as expected
0000:00:06.0: read failed as expected
0000:00:07.0: read failed as expected
0000:00:08.0: read failed as expected
0000:00:09.0: read successfully as expected
0000:00:06.0: read successfully as expected
0000:00:07.0: read successfully as expected
0000:00:08.0: read successfully as expected
Cleaning up...

real 0m0.246s
user 0m0.106s
sys 0m0.095s
14:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:39 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_err_injection
************************************
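The error-injection test arms SPDK's error injection so that Get Features and read commands fail once, then verifies they succeed after the injection is cleared. For comparison, a plain Get Features round-trip can be issued from the shell with nvme-cli against a hypothetical kernel-owned controller; the device path and feature choice are assumptions:

    # Get Features, feature ID 0x07 (number of queues), via nvme-cli.
    nvme get-feature /dev/nvme0 --feature-id=7 --human-readable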
14:09:39 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
14:09:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
14:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
14:09:39 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_overhead
************************************
14:09:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
Initializing NVMe Controllers
Attached to 0000:00:09.0
Attached to 0000:00:06.0
Attached to 0000:00:07.0
Attached to 0000:00:08.0
Initialization complete. Launching workers.
submit (in ns) avg, min, max = 11342.8, 9979.2, 63154.6
complete (in ns) avg, min, max = 7625.4, 7271.5, 54396.9

Submit histogram
================
Range in us Cumulative Count
[per-bucket data elided: buckets span 9.945 us to 63.409 us; cumulative count reaches 100.0000% at 63.409 us]

Complete histogram
==================
Range in us Cumulative Count
[per-bucket data elided: buckets span 7.237 us to 54.745 us; cumulative count reaches 100.0000% at 54.745 us]

real 0m1.196s
user 0m1.066s
sys 0m0.090s
14:09:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
14:09:40 -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvme_overhead
************************************
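The "submit (in ns) avg, min, max" and "complete (in ns)" lines are single-pass statistics over per-I/O tracking points. The same reduction over a hypothetical file of nanosecond samples, one value per line:

    # One-pass avg/min/max, mirroring the summary format above.
    # submit_ns.txt is a hypothetical input file.
    awk 'NR == 1 { min = max = $1 }
         { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
         END { if (NR) printf "submit (in ns) avg, min, max = %.1f, %.1f, %.1f\n",
                   sum / NR, min, max }' submit_ns.txt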
00:08:42.573 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:42.573 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:42.573 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:42.573 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:42.573 Initialization complete. Launching workers. 00:08:42.573 Starting thread on core 1 with urgent priority queue 00:08:42.573 Starting thread on core 2 with urgent priority queue 00:08:42.573 Starting thread on core 3 with urgent priority queue 00:08:42.573 Starting thread on core 0 with urgent priority queue 00:08:42.573 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:42.573 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:42.573 QEMU NVMe Ctrl (12340 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:08:42.573 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:08:42.573 QEMU NVMe Ctrl (12341 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:08:42.573 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:08:42.573 ======================================================== 00:08:42.573 00:08:42.573 00:08:42.573 real 0m3.416s 00:08:42.573 user 0m9.567s 00:08:42.573 sys 0m0.109s 00:08:42.573 14:09:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.573 ************************************ 00:08:42.573 END TEST nvme_arbitration 00:08:42.573 ************************************ 00:08:42.573 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.573 14:09:44 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:08:42.573 14:09:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:42.573 14:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.573 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:08:42.835 ************************************ 00:08:42.835 START TEST nvme_single_aen 00:08:42.835 ************************************ 00:08:42.835 14:09:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:08:42.835 [2024-12-04 14:09:44.078699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.835 [2024-12-04 14:09:44.078782] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.835 [2024-12-04 14:09:44.213508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:08:42.835 [2024-12-04 14:09:44.216250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:08:42.835 [2024-12-04 14:09:44.218586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:08:42.835 [2024-12-04 14:09:44.220626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:08:42.835 Asynchronous Event Request test 00:08:42.835 Attached to 0000:00:09.0 00:08:42.835 Attached to 0000:00:06.0 00:08:42.835 Attached to 0000:00:07.0 00:08:42.835 Attached to 0000:00:08.0 00:08:42.835 Reset controller to setup AER completions for this process 00:08:42.835 Registering asynchronous event callbacks... 
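A note on the trigger being set up at this point: the aer tool cannot wait for a genuine thermal event, so it records each controller's temperature threshold (feature 0x04), reprograms it below the current composite temperature, and accepts the resulting SMART/health event (log page 2) as the AER under test; the threshold lines that follow are exactly that sequence. A minimal stand-alone version of the same trick, sketched with nvme-cli against a hypothetical /dev/nvme0 (this run drives the controllers through SPDK rather than the kernel driver, so both the tool and the device name are assumptions):

  # Assumed nvme-cli equivalent of the threshold trick used by the aer tool.
  # Feature 0x04 is Temperature Threshold; the controllers report 323 Kelvin,
  # so any value below that forces an immediate threshold crossing.
  nvme get-feature /dev/nvme0 -f 0x04          # record the original threshold (343 K here)
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x01  # program a threshold far below the current 323 K
  # the controller then posts an asynchronous event for log page 2 (SMART/health),
  # matching the aer_cb lines printed below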
00:08:42.835 Getting orig temperature thresholds of all controllers 00:08:42.835 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.835 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.835 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.835 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:42.835 Setting all controllers temperature threshold low to trigger AER 00:08:42.835 Waiting for all controllers temperature threshold to be set lower 00:08:42.835 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.835 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:08:42.835 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.835 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:08:42.835 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.835 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:08:42.835 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:42.835 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:08:42.835 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.835 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.836 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.836 Waiting for all controllers to trigger AER and reset threshold 00:08:42.836 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:42.836 Cleaning up... 00:08:42.836 00:08:42.836 real 0m0.215s 00:08:42.836 user 0m0.072s 00:08:42.836 sys 0m0.102s 00:08:42.836 ************************************ 00:08:42.836 14:09:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.836 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 END TEST nvme_single_aen 00:08:42.836 ************************************ 00:08:43.097 14:09:44 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:43.097 14:09:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.097 14:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.097 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:08:43.097 ************************************ 00:08:43.097 START TEST nvme_doorbell_aers 00:08:43.097 ************************************ 00:08:43.097 14:09:44 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:08:43.097 14:09:44 -- nvme/nvme.sh@70 -- # bdfs=() 00:08:43.097 14:09:44 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:43.097 14:09:44 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:43.097 14:09:44 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:43.097 14:09:44 -- common/autotest_common.sh@1508 -- # bdfs=() 00:08:43.097 14:09:44 -- common/autotest_common.sh@1508 -- # local bdfs 00:08:43.097 14:09:44 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.097 14:09:44 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.097 14:09:44 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:08:43.097 14:09:44 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:08:43.097 14:09:44 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:08:43.097 14:09:44 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:43.097 14:09:44 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:08:43.359 [2024-12-04 14:09:44.594976] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:08:53.344 Executing: test_write_invalid_db 00:08:53.344 Waiting for AER completion... 00:08:53.344 Failure: test_write_invalid_db 00:08:53.344 00:08:53.344 Executing: test_invalid_db_write_overflow_sq 00:08:53.344 Waiting for AER completion... 00:08:53.344 Failure: test_invalid_db_write_overflow_sq 00:08:53.344 00:08:53.344 Executing: test_invalid_db_write_overflow_cq 00:08:53.344 Waiting for AER completion... 00:08:53.344 Failure: test_invalid_db_write_overflow_cq 00:08:53.344 00:08:53.344 14:09:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:53.344 14:09:54 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:08:53.344 [2024-12-04 14:09:54.638680] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:03.352 Executing: test_write_invalid_db 00:09:03.352 Waiting for AER completion... 00:09:03.352 Failure: test_write_invalid_db 00:09:03.352 00:09:03.352 Executing: test_invalid_db_write_overflow_sq 00:09:03.352 Waiting for AER completion... 00:09:03.352 Failure: test_invalid_db_write_overflow_sq 00:09:03.352 00:09:03.352 Executing: test_invalid_db_write_overflow_cq 00:09:03.352 Waiting for AER completion... 00:09:03.352 Failure: test_invalid_db_write_overflow_cq 00:09:03.352 00:09:03.352 14:10:04 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:03.352 14:10:04 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:03.352 [2024-12-04 14:10:04.653215] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:13.323 Executing: test_write_invalid_db 00:09:13.323 Waiting for AER completion... 00:09:13.323 Failure: test_write_invalid_db 00:09:13.323 00:09:13.323 Executing: test_invalid_db_write_overflow_sq 00:09:13.323 Waiting for AER completion... 00:09:13.323 Failure: test_invalid_db_write_overflow_sq 00:09:13.323 00:09:13.323 Executing: test_invalid_db_write_overflow_cq 00:09:13.323 Waiting for AER completion... 00:09:13.323 Failure: test_invalid_db_write_overflow_cq 00:09:13.323 00:09:13.323 14:10:14 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:13.323 14:10:14 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:09:13.323 [2024-12-04 14:10:14.686326] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 Executing: test_write_invalid_db 00:09:23.294 Waiting for AER completion... 00:09:23.294 Failure: test_write_invalid_db 00:09:23.294 00:09:23.294 Executing: test_invalid_db_write_overflow_sq 00:09:23.294 Waiting for AER completion... 00:09:23.294 Failure: test_invalid_db_write_overflow_sq 00:09:23.294 00:09:23.294 Executing: test_invalid_db_write_overflow_cq 00:09:23.294 Waiting for AER completion... 
00:09:23.294 Failure: test_invalid_db_write_overflow_cq 00:09:23.294 00:09:23.294 00:09:23.294 real 0m40.192s 00:09:23.294 user 0m34.019s 00:09:23.294 sys 0m5.789s 00:09:23.294 14:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.294 14:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.294 ************************************ 00:09:23.294 END TEST nvme_doorbell_aers 00:09:23.294 ************************************ 00:09:23.294 14:10:24 -- nvme/nvme.sh@97 -- # uname 00:09:23.294 14:10:24 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:23.294 14:10:24 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:23.294 14:10:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:23.294 14:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.294 14:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.294 ************************************ 00:09:23.294 START TEST nvme_multi_aen 00:09:23.294 ************************************ 00:09:23.294 14:10:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:09:23.294 [2024-12-04 14:10:24.589117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.294 [2024-12-04 14:10:24.589164] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.294 [2024-12-04 14:10:24.711376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:23.294 [2024-12-04 14:10:24.711417] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.711442] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.711452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.712742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:23.294 [2024-12-04 14:10:24.712769] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.712789] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.712798] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.713812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:23.294 [2024-12-04 14:10:24.713829] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.713843] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 
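(The *ERROR* lines in this stretch are expected: admin requests left pending by an earlier test process, pid 63731, are simply discarded as each controller is reset.) For reference, the four ten-second doorbell passes above were produced by a short per-controller loop; reconstructed from the get_nvme_bdfs trace at the start of the test, under the same repo layout, it reads:

  # Reconstruction of the traced doorbell loop: gen_nvme.sh emits a JSON bdev
  # config and jq extracts the PCIe addresses it discovered.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
          -r "trtype:PCIe traddr:$bdf"
  done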
00:09:23.294 [2024-12-04 14:10:24.713852] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.714823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:23.294 [2024-12-04 14:10:24.714840] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.714854] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.714862] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63731) is not found. Dropping the request. 00:09:23.294 [2024-12-04 14:10:24.724322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.294 Child process pid: 64248 00:09:23.294 [2024-12-04 14:10:24.724555] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.553 [Child] Asynchronous Event Request test 00:09:23.553 [Child] Attached to 0000:00:09.0 00:09:23.553 [Child] Attached to 0000:00:06.0 00:09:23.553 [Child] Attached to 0000:00:07.0 00:09:23.553 [Child] Attached to 0000:00:08.0 00:09:23.553 [Child] Registering asynchronous event callbacks... 00:09:23.553 [Child] Getting orig temperature thresholds of all controllers 00:09:23.553 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:23.553 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 [Child] Cleaning up... 00:09:23.553 Asynchronous Event Request test 00:09:23.553 Attached to 0000:00:09.0 00:09:23.553 Attached to 0000:00:06.0 00:09:23.553 Attached to 0000:00:07.0 00:09:23.553 Attached to 0000:00:08.0 00:09:23.553 Reset controller to setup AER completions for this process 00:09:23.553 Registering asynchronous event callbacks... 
00:09:23.553 Getting orig temperature thresholds of all controllers 00:09:23.553 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:23.553 Setting all controllers temperature threshold low to trigger AER 00:09:23.553 Waiting for all controllers temperature threshold to be set lower 00:09:23.553 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:23.553 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:23.553 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:23.553 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:23.553 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:23.553 Waiting for all controllers to trigger AER and reset threshold 00:09:23.553 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:23.553 Cleaning up... 00:09:23.553 00:09:23.553 real 0m0.404s 00:09:23.553 user 0m0.124s 00:09:23.553 sys 0m0.166s 00:09:23.553 14:10:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.553 14:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.553 ************************************ 00:09:23.553 END TEST nvme_multi_aen 00:09:23.553 ************************************ 00:09:23.553 14:10:24 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:23.553 14:10:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:23.553 14:10:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.553 14:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:23.553 ************************************ 00:09:23.553 START TEST nvme_startup 00:09:23.553 ************************************ 00:09:23.553 14:10:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:23.810 Initializing NVMe Controllers 00:09:23.810 Attached to 0000:00:09.0 00:09:23.810 Attached to 0000:00:06.0 00:09:23.810 Attached to 0000:00:07.0 00:09:23.810 Attached to 0000:00:08.0 00:09:23.810 Initialization complete. 00:09:23.810 Time used:133881.750 (us). 
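For scale, the figure just printed is the in-tool measurement: 133881.75 µs to probe and attach all four emulated controllers, consistent with the 0m0.190s wall-clock time reported below once process start-up and teardown are added. A one-line check of the unit conversion:

  awk 'BEGIN { printf "%.3f s\n", 133881.750 / 1e6 }'   # -> 0.134 s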
00:09:23.810 00:09:23.810 real 0m0.190s 00:09:23.810 user 0m0.052s 00:09:23.810 sys 0m0.095s 00:09:23.810 14:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.810 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 ************************************ 00:09:23.810 END TEST nvme_startup 00:09:23.810 ************************************ 00:09:23.810 14:10:25 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:23.810 14:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.810 14:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.810 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 ************************************ 00:09:23.810 START TEST nvme_multi_secondary 00:09:23.810 ************************************ 00:09:23.810 14:10:25 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:09:23.810 14:10:25 -- nvme/nvme.sh@52 -- # pid0=64299 00:09:23.810 14:10:25 -- nvme/nvme.sh@54 -- # pid1=64300 00:09:23.810 14:10:25 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:23.810 14:10:25 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:23.810 14:10:25 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:27.991 Initializing NVMe Controllers 00:09:27.991 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:27.991 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:27.991 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:27.991 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:27.991 Initialization complete. Launching workers. 
00:09:27.991 ======================================================== 00:09:27.991 Latency(us) 00:09:27.991 Device Information : IOPS MiB/s Average min max 00:09:27.991 PCIE (0000:00:09.0) NSID 1 from core 2: 3224.61 12.60 4960.66 770.89 12711.83 00:09:27.991 PCIE (0000:00:06.0) NSID 1 from core 2: 3224.61 12.60 4960.34 749.20 12529.64 00:09:27.991 PCIE (0000:00:07.0) NSID 1 from core 2: 3224.61 12.60 4961.41 765.06 15630.03 00:09:27.991 PCIE (0000:00:08.0) NSID 1 from core 2: 3224.61 12.60 4962.15 778.37 13572.16 00:09:27.991 PCIE (0000:00:08.0) NSID 2 from core 2: 3224.61 12.60 4962.13 770.21 12839.61 00:09:27.991 PCIE (0000:00:08.0) NSID 3 from core 2: 3224.61 12.60 4962.10 765.82 12890.55 00:09:27.991 ======================================================== 00:09:27.991 Total : 19347.65 75.58 4961.47 749.20 15630.03 00:09:27.991 00:09:27.991 14:10:28 -- nvme/nvme.sh@56 -- # wait 64299 00:09:27.991 Initializing NVMe Controllers 00:09:27.991 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:27.991 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:27.991 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:27.991 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:27.991 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:27.991 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:27.991 Initialization complete. Launching workers. 00:09:27.991 ======================================================== 00:09:27.991 Latency(us) 00:09:27.991 Device Information : IOPS MiB/s Average min max 00:09:27.991 PCIE (0000:00:09.0) NSID 1 from core 1: 7615.89 29.75 2100.45 1002.31 6067.93 00:09:27.991 PCIE (0000:00:06.0) NSID 1 from core 1: 7615.89 29.75 2099.54 1015.30 5573.45 00:09:27.991 PCIE (0000:00:07.0) NSID 1 from core 1: 7615.89 29.75 2100.42 962.19 5541.52 00:09:27.991 PCIE (0000:00:08.0) NSID 1 from core 1: 7615.89 29.75 2100.49 1002.10 5985.19 00:09:27.991 PCIE (0000:00:08.0) NSID 2 from core 1: 7615.89 29.75 2100.45 1034.15 6176.95 00:09:27.991 PCIE (0000:00:08.0) NSID 3 from core 1: 7615.89 29.75 2100.44 988.54 6310.78 00:09:27.991 ======================================================== 00:09:27.991 Total : 45695.33 178.50 2100.30 962.19 6310.78 00:09:27.991 00:09:29.367 Initializing NVMe Controllers 00:09:29.367 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:29.367 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:29.367 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:29.367 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:29.367 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:29.367 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:29.367 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:29.367 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:29.367 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:29.367 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:29.367 Initialization complete. Launching workers. 
00:09:29.367 ======================================================== 00:09:29.367 Latency(us) 00:09:29.367 Device Information : IOPS MiB/s Average min max 00:09:29.367 PCIE (0000:00:09.0) NSID 1 from core 0: 11082.93 43.29 1443.29 698.10 5651.04 00:09:29.367 PCIE (0000:00:06.0) NSID 1 from core 0: 11082.93 43.29 1442.46 687.46 6097.05 00:09:29.367 PCIE (0000:00:07.0) NSID 1 from core 0: 11082.93 43.29 1443.25 681.62 5809.57 00:09:29.367 PCIE (0000:00:08.0) NSID 1 from core 0: 11082.93 43.29 1443.23 686.65 5544.75 00:09:29.367 PCIE (0000:00:08.0) NSID 2 from core 0: 11082.93 43.29 1443.21 648.16 5693.18 00:09:29.367 PCIE (0000:00:08.0) NSID 3 from core 0: 11082.93 43.29 1443.19 634.92 5809.50 00:09:29.367 ======================================================== 00:09:29.367 Total : 66497.60 259.76 1443.11 634.92 6097.05 00:09:29.367 00:09:29.367 14:10:30 -- nvme/nvme.sh@57 -- # wait 64300 00:09:29.367 14:10:30 -- nvme/nvme.sh@61 -- # pid0=64370 00:09:29.367 14:10:30 -- nvme/nvme.sh@63 -- # pid1=64371 00:09:29.367 14:10:30 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:29.367 14:10:30 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:29.367 14:10:30 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:32.650 Initializing NVMe Controllers 00:09:32.650 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:32.650 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:09:32.650 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:09:32.650 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:09:32.650 Initialization complete. Launching workers. 
00:09:32.650 ======================================================== 00:09:32.650 Latency(us) 00:09:32.650 Device Information : IOPS MiB/s Average min max 00:09:32.650 PCIE (0000:00:09.0) NSID 1 from core 1: 7690.00 30.04 2080.22 721.65 5657.93 00:09:32.650 PCIE (0000:00:06.0) NSID 1 from core 1: 7690.00 30.04 2079.56 697.93 5821.71 00:09:32.650 PCIE (0000:00:07.0) NSID 1 from core 1: 7690.00 30.04 2080.42 720.27 5629.43 00:09:32.650 PCIE (0000:00:08.0) NSID 1 from core 1: 7690.00 30.04 2080.38 721.92 6081.28 00:09:32.650 PCIE (0000:00:08.0) NSID 2 from core 1: 7690.00 30.04 2080.54 724.91 5800.58 00:09:32.650 PCIE (0000:00:08.0) NSID 3 from core 1: 7690.00 30.04 2080.51 720.89 5673.29 00:09:32.650 ======================================================== 00:09:32.650 Total : 46140.00 180.23 2080.27 697.93 6081.28 00:09:32.650 00:09:32.650 Initializing NVMe Controllers 00:09:32.650 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:32.650 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:32.650 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:32.650 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:32.650 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:32.650 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:32.650 Initialization complete. Launching workers. 00:09:32.650 ======================================================== 00:09:32.650 Latency(us) 00:09:32.650 Device Information : IOPS MiB/s Average min max 00:09:32.650 PCIE (0000:00:09.0) NSID 1 from core 0: 7443.89 29.08 2149.00 758.47 6199.15 00:09:32.650 PCIE (0000:00:06.0) NSID 1 from core 0: 7443.89 29.08 2148.13 741.49 6234.62 00:09:32.650 PCIE (0000:00:07.0) NSID 1 from core 0: 7443.89 29.08 2149.08 754.89 6318.00 00:09:32.651 PCIE (0000:00:08.0) NSID 1 from core 0: 7443.89 29.08 2149.03 751.23 6373.68 00:09:32.651 PCIE (0000:00:08.0) NSID 2 from core 0: 7443.89 29.08 2149.00 740.20 6318.62 00:09:32.651 PCIE (0000:00:08.0) NSID 3 from core 0: 7443.89 29.08 2148.97 743.22 6309.84 00:09:32.651 ======================================================== 00:09:32.651 Total : 44663.32 174.47 2148.87 740.20 6373.68 00:09:32.651 00:09:35.183 Initializing NVMe Controllers 00:09:35.183 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:35.183 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:35.183 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:35.183 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:35.183 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:09:35.183 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:09:35.183 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:09:35.183 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:09:35.183 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:09:35.183 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:09:35.183 Initialization complete. Launching workers. 
00:09:35.183 ======================================================== 00:09:35.183 Latency(us) 00:09:35.183 Device Information : IOPS MiB/s Average min max 00:09:35.183 PCIE (0000:00:09.0) NSID 1 from core 2: 4525.67 17.68 3534.74 735.89 13274.26 00:09:35.183 PCIE (0000:00:06.0) NSID 1 from core 2: 4525.67 17.68 3533.50 719.63 12992.19 00:09:35.183 PCIE (0000:00:07.0) NSID 1 from core 2: 4525.67 17.68 3535.28 745.89 12626.09 00:09:35.183 PCIE (0000:00:08.0) NSID 1 from core 2: 4525.67 17.68 3535.18 748.62 12306.92 00:09:35.183 PCIE (0000:00:08.0) NSID 2 from core 2: 4525.67 17.68 3535.25 748.51 12926.24 00:09:35.183 PCIE (0000:00:08.0) NSID 3 from core 2: 4525.67 17.68 3535.16 741.73 12789.48 00:09:35.183 ======================================================== 00:09:35.183 Total : 27154.03 106.07 3534.85 719.63 13274.26 00:09:35.183 00:09:35.183 14:10:36 -- nvme/nvme.sh@65 -- # wait 64370 00:09:35.183 14:10:36 -- nvme/nvme.sh@66 -- # wait 64371 00:09:35.183 00:09:35.183 real 0m10.928s 00:09:35.183 user 0m18.635s 00:09:35.183 sys 0m0.628s 00:09:35.183 14:10:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.183 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:09:35.183 ************************************ 00:09:35.183 END TEST nvme_multi_secondary 00:09:35.183 ************************************ 00:09:35.183 14:10:36 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:35.183 14:10:36 -- nvme/nvme.sh@102 -- # kill_stub 00:09:35.183 14:10:36 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63312 ]] 00:09:35.183 14:10:36 -- common/autotest_common.sh@1076 -- # kill 63312 00:09:35.183 14:10:36 -- common/autotest_common.sh@1077 -- # wait 63312 00:09:35.756 [2024-12-04 14:10:36.948225] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:35.756 [2024-12-04 14:10:36.948275] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:35.756 [2024-12-04 14:10:36.948286] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:35.756 [2024-12-04 14:10:36.948297] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:36.724 [2024-12-04 14:10:37.955787] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:36.724 [2024-12-04 14:10:37.955857] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:36.724 [2024-12-04 14:10:37.955869] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:36.724 [2024-12-04 14:10:37.955880] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:37.296 [2024-12-04 14:10:38.483230] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:37.296 [2024-12-04 14:10:38.483301] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. 
Dropping the request. 00:09:37.296 [2024-12-04 14:10:38.483313] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:37.296 [2024-12-04 14:10:38.483324] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:39.210 [2024-12-04 14:10:40.473194] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:39.210 [2024-12-04 14:10:40.473277] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:39.210 [2024-12-04 14:10:40.473290] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:39.210 [2024-12-04 14:10:40.473305] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64247) is not found. Dropping the request. 00:09:39.210 14:10:40 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:09:39.472 14:10:40 -- common/autotest_common.sh@1083 -- # echo 2 00:09:39.472 14:10:40 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:39.472 14:10:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.472 14:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:39.472 ************************************ 00:09:39.472 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:39.472 ************************************ 00:09:39.472 14:10:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:39.472 * Looking for test storage... 00:09:39.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.472 14:10:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:39.472 14:10:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:39.472 14:10:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:39.472 14:10:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:39.472 14:10:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:39.472 14:10:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:39.472 14:10:40 -- scripts/common.sh@335 -- # IFS=.-: 00:09:39.472 14:10:40 -- scripts/common.sh@335 -- # read -ra ver1 00:09:39.472 14:10:40 -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.472 14:10:40 -- scripts/common.sh@336 -- # read -ra ver2 00:09:39.472 14:10:40 -- scripts/common.sh@337 -- # local 'op=<' 00:09:39.472 14:10:40 -- scripts/common.sh@339 -- # ver1_l=2 00:09:39.472 14:10:40 -- scripts/common.sh@340 -- # ver2_l=1 00:09:39.472 14:10:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:39.472 14:10:40 -- scripts/common.sh@343 -- # case "$op" in 00:09:39.472 14:10:40 -- scripts/common.sh@344 -- # : 1 00:09:39.472 14:10:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:39.472 14:10:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.472 14:10:40 -- scripts/common.sh@364 -- # decimal 1 00:09:39.472 14:10:40 -- scripts/common.sh@352 -- # local d=1 00:09:39.472 14:10:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.472 14:10:40 -- scripts/common.sh@354 -- # echo 1 00:09:39.472 14:10:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:39.472 14:10:40 -- scripts/common.sh@365 -- # decimal 2 00:09:39.472 14:10:40 -- scripts/common.sh@352 -- # local d=2 00:09:39.472 14:10:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.472 14:10:40 -- scripts/common.sh@354 -- # echo 2 00:09:39.472 14:10:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:39.472 14:10:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:39.472 14:10:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:39.472 14:10:40 -- scripts/common.sh@367 -- # return 0 00:09:39.472 14:10:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.472 --rc genhtml_branch_coverage=1 00:09:39.472 --rc genhtml_function_coverage=1 00:09:39.472 --rc genhtml_legend=1 00:09:39.472 --rc geninfo_all_blocks=1 00:09:39.472 --rc geninfo_unexecuted_blocks=1 00:09:39.472 00:09:39.472 ' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.472 --rc genhtml_branch_coverage=1 00:09:39.472 --rc genhtml_function_coverage=1 00:09:39.472 --rc genhtml_legend=1 00:09:39.472 --rc geninfo_all_blocks=1 00:09:39.472 --rc geninfo_unexecuted_blocks=1 00:09:39.472 00:09:39.472 ' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:39.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.472 --rc genhtml_branch_coverage=1 00:09:39.472 --rc genhtml_function_coverage=1 00:09:39.472 --rc genhtml_legend=1 00:09:39.472 --rc geninfo_all_blocks=1 00:09:39.472 --rc geninfo_unexecuted_blocks=1 00:09:39.472 00:09:39.472 ' 00:09:39.472 14:10:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:39.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.473 --rc genhtml_branch_coverage=1 00:09:39.473 --rc genhtml_function_coverage=1 00:09:39.473 --rc genhtml_legend=1 00:09:39.473 --rc geninfo_all_blocks=1 00:09:39.473 --rc geninfo_unexecuted_blocks=1 00:09:39.473 00:09:39.473 ' 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:39.473 14:10:40 -- common/autotest_common.sh@1519 -- # bdfs=() 00:09:39.473 14:10:40 -- common/autotest_common.sh@1519 -- # local bdfs 00:09:39.473 14:10:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.473 14:10:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:39.473 14:10:40 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:39.473 14:10:40 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:39.473 14:10:40 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.473 14:10:40 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.473 14:10:40 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:39.473 14:10:40 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:39.473 14:10:40 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:39.473 14:10:40 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64576 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64576 00:09:39.473 14:10:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:39.473 14:10:40 -- common/autotest_common.sh@829 -- # '[' -z 64576 ']' 00:09:39.473 14:10:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.473 14:10:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.473 14:10:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.473 14:10:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.473 14:10:40 -- common/autotest_common.sh@10 -- # set +x 00:09:39.734 [2024-12-04 14:10:40.984741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
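waitforlisten, invoked just above to gate the test on target readiness, amounts to polling the target's RPC socket until it answers (the default /var/tmp/spdk.sock named in the log line). A reduced sketch of the launch-and-wait pattern; the polling body is an assumption about what the helper does, not a copy of it:

  # Reduced launch-and-wait sketch; the until-loop approximates waitforlisten.
  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/build/bin/spdk_tgt" -m 0xF &
  spdk_target_pid=$!
  until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2   # keep polling until the RPC server is listening
  done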
00:09:39.734 [2024-12-04 14:10:40.984880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64576 ] 00:09:39.734 [2024-12-04 14:10:41.148381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.995 [2024-12-04 14:10:41.377879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.996 [2024-12-04 14:10:41.378360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.996 [2024-12-04 14:10:41.378770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.996 [2024-12-04 14:10:41.379121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.996 [2024-12-04 14:10:41.379163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.408 14:10:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.408 14:10:42 -- common/autotest_common.sh@862 -- # return 0 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:09:41.408 14:10:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.408 14:10:42 -- common/autotest_common.sh@10 -- # set +x 00:09:41.408 nvme0n1 00:09:41.408 14:10:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_4qYrl.txt 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:41.408 14:10:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.408 14:10:42 -- common/autotest_common.sh@10 -- # set +x 00:09:41.408 true 00:09:41.408 14:10:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733321442 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64612 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:41.408 14:10:42 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:43.309 14:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.309 14:10:44 -- common/autotest_common.sh@10 -- # set +x 00:09:43.309 [2024-12-04 14:10:44.567595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:43.309 [2024-12-04 14:10:44.567790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:43.309 [2024-12-04 14:10:44.567808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:43.309 [2024-12-04 14:10:44.567819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:43.309 [2024-12-04 14:10:44.569368] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:43.309 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64612 00:09:43.309 14:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64612 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64612 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:43.309 14:10:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.309 14:10:44 -- common/autotest_common.sh@10 -- # set +x 00:09:43.309 14:10:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_4qYrl.txt 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_4qYrl.txt 00:09:43.309 14:10:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64576 00:09:43.309 14:10:44 -- common/autotest_common.sh@936 -- # '[' -z 64576 ']' 00:09:43.309 14:10:44 -- common/autotest_common.sh@940 -- # kill -0 64576 00:09:43.309 14:10:44 -- common/autotest_common.sh@941 -- # uname 00:09:43.309 14:10:44 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:43.309 14:10:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64576 00:09:43.309 killing process with pid 64576 00:09:43.309 14:10:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:43.309 14:10:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:43.309 14:10:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64576' 00:09:43.309 14:10:44 -- common/autotest_common.sh@955 -- # kill 64576 00:09:43.309 14:10:44 -- common/autotest_common.sh@960 -- # wait 64576 00:09:44.684 14:10:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:44.684 14:10:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:44.684 ************************************ 00:09:44.684 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:44.684 ************************************ 00:09:44.684 00:09:44.684 real 0m5.155s 00:09:44.684 user 0m18.016s 00:09:44.684 sys 0m0.539s 00:09:44.684 14:10:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.684 14:10:45 -- common/autotest_common.sh@10 -- # set +x 00:09:44.684 14:10:45 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:44.684 14:10:45 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:44.684 14:10:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:44.684 14:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.684 14:10:45 -- common/autotest_common.sh@10 -- # set +x 00:09:44.684 ************************************ 00:09:44.684 START TEST nvme_fio 00:09:44.684 ************************************ 00:09:44.684 14:10:45 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:09:44.684 14:10:45 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:44.684 14:10:45 -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:44.684 14:10:45 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:44.684 14:10:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:44.684 14:10:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:44.684 14:10:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:44.684 14:10:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:44.684 14:10:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:44.684 14:10:45 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:44.684 14:10:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:44.684 14:10:45 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:09:44.684 14:10:45 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:44.684 14:10:45 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:44.684 14:10:45 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:44.684 14:10:45 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:44.944 14:10:46 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:44.945 14:10:46 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:44.945 14:10:46 -- nvme/nvme.sh@41 -- # bs=4096 00:09:44.945 14:10:46 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:44.945 14:10:46 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:44.945 14:10:46 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:09:44.945 14:10:46 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:44.945 14:10:46 -- common/autotest_common.sh@1328 -- # local sanitizers 00:09:44.945 14:10:46 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.945 14:10:46 -- common/autotest_common.sh@1330 -- # shift 00:09:44.945 14:10:46 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:09:44.945 14:10:46 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:09:44.945 14:10:46 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.945 14:10:46 -- common/autotest_common.sh@1334 -- # grep libasan 00:09:44.945 14:10:46 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:09:44.945 14:10:46 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:44.945 14:10:46 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:44.945 14:10:46 -- common/autotest_common.sh@1336 -- # break 00:09:44.945 14:10:46 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:44.945 14:10:46 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:09:45.205 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:45.205 fio-3.35 00:09:45.205 Starting 1 thread 00:09:50.493 00:09:50.493 test: (groupid=0, jobs=1): err= 0: pid=64749: Wed Dec 4 14:10:51 2024 00:09:50.494 read: IOPS=21.5k, BW=84.0MiB/s (88.1MB/s)(168MiB/2001msec) 00:09:50.494 slat (nsec): min=3320, max=74844, avg=5105.41, stdev=2485.59 00:09:50.494 clat (usec): min=286, max=8115, avg=2900.56, stdev=966.63 00:09:50.494 lat (usec): min=292, max=8129, avg=2905.67, stdev=967.88 00:09:50.494 clat percentiles (usec): 00:09:50.494 | 1.00th=[ 1385], 5.00th=[ 1975], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:50.494 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2737], 00:09:50.494 | 70.00th=[ 2900], 80.00th=[ 3261], 90.00th=[ 4293], 95.00th=[ 5145], 00:09:50.494 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 7635], 00:09:50.494 | 99.99th=[ 7963] 00:09:50.494 bw ( KiB/s): min=86384, max=89200, per=100.00%, avg=87866.67, stdev=1413.93, samples=3 00:09:50.494 iops : min=21596, max=22300, avg=21966.67, stdev=353.48, samples=3 00:09:50.494 write: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:09:50.494 slat (nsec): min=3386, max=81555, avg=5249.02, stdev=2477.53 00:09:50.494 clat (usec): min=312, max=17101, avg=3046.39, stdev=1362.93 00:09:50.494 lat (usec): min=318, max=17106, avg=3051.64, stdev=1363.81 00:09:50.494 clat percentiles (usec): 00:09:50.494 | 1.00th=[ 1434], 5.00th=[ 2040], 10.00th=[ 2212], 20.00th=[ 2376], 00:09:50.494 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:50.494 | 70.00th=[ 2966], 80.00th=[ 3392], 90.00th=[ 4555], 95.00th=[ 5407], 00:09:50.494 | 99.00th=[ 7701], 
99.50th=[11994], 99.90th=[15664], 99.95th=[16319], 00:09:50.494 | 99.99th=[16581] 00:09:50.494 bw ( KiB/s): min=86768, max=89024, per=100.00%, avg=88064.00, stdev=1164.93, samples=3 00:09:50.494 iops : min=21692, max=22256, avg=22016.00, stdev=291.23, samples=3 00:09:50.494 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.11% 00:09:50.494 lat (msec) : 2=4.85%, 4=81.97%, 10=12.65%, 20=0.40% 00:09:50.494 cpu : usr=99.10%, sys=0.05%, ctx=4, majf=0, minf=608 00:09:50.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.494 issued rwts: total=43051,42734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.494 00:09:50.494 Run status group 0 (all jobs): 00:09:50.494 READ: bw=84.0MiB/s (88.1MB/s), 84.0MiB/s-84.0MiB/s (88.1MB/s-88.1MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:50.494 WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:50.494 ----------------------------------------------------- 00:09:50.494 Suppressions used: 00:09:50.494 count bytes template 00:09:50.494 1 32 /usr/src/fio/parse.c 00:09:50.494 1 8 libtcmalloc_minimal.so 00:09:50.494 ----------------------------------------------------- 00:09:50.494 00:09:50.755 14:10:51 -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.755 14:10:51 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:50.755 14:10:51 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:50.755 14:10:51 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:50.755 14:10:52 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:09:50.755 14:10:52 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:51.016 14:10:52 -- nvme/nvme.sh@41 -- # bs=4096 00:09:51.016 14:10:52 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:51.016 14:10:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:51.016 14:10:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:09:51.016 14:10:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.016 14:10:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:09:51.016 14:10:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.016 14:10:52 -- common/autotest_common.sh@1330 -- # shift 00:09:51.016 14:10:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:09:51.016 14:10:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.016 14:10:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:51.016 14:10:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:09:51.016 14:10:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:09:51.016 14:10:52 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:51.016 14:10:52 -- common/autotest_common.sh@1335 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:09:51.016 14:10:52 -- common/autotest_common.sh@1336 -- # break 00:09:51.016 14:10:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:51.016 14:10:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:09:51.277 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:51.277 fio-3.35 00:09:51.277 Starting 1 thread 00:09:57.855 00:09:57.855 test: (groupid=0, jobs=1): err= 0: pid=64804: Wed Dec 4 14:10:58 2024 00:09:57.855 read: IOPS=21.2k, BW=82.8MiB/s (86.9MB/s)(166MiB/2001msec) 00:09:57.855 slat (nsec): min=3326, max=70303, avg=5183.55, stdev=2741.25 00:09:57.855 clat (usec): min=664, max=12658, avg=3012.88, stdev=1054.24 00:09:57.855 lat (usec): min=700, max=12727, avg=3018.06, stdev=1055.78 00:09:57.855 clat percentiles (usec): 00:09:57.855 | 1.00th=[ 1549], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2409], 00:09:57.855 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2802], 00:09:57.855 | 70.00th=[ 2966], 80.00th=[ 3294], 90.00th=[ 4555], 95.00th=[ 5473], 00:09:57.855 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 9372], 00:09:57.855 | 99.99th=[12125] 00:09:57.855 bw ( KiB/s): min=80247, max=90640, per=100.00%, avg=85258.33, stdev=5206.39, samples=3 00:09:57.855 iops : min=20061, max=22660, avg=21314.33, stdev=1301.96, samples=3 00:09:57.855 write: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec); 0 zone resets 00:09:57.855 slat (usec): min=3, max=125, avg= 5.33, stdev= 2.74 00:09:57.855 clat (usec): min=761, max=12390, avg=3019.86, stdev=1042.80 00:09:57.855 lat (usec): min=765, max=12403, avg=3025.19, stdev=1044.31 00:09:57.855 clat percentiles (usec): 00:09:57.855 | 1.00th=[ 1598], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:57.855 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2802], 00:09:57.855 | 70.00th=[ 2966], 80.00th=[ 3326], 90.00th=[ 4555], 95.00th=[ 5473], 00:09:57.855 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[ 9634], 00:09:57.855 | 99.99th=[11731] 00:09:57.855 bw ( KiB/s): min=80127, max=90520, per=100.00%, avg=85429.00, stdev=5199.71, samples=3 00:09:57.855 iops : min=20031, max=22630, avg=21357.00, stdev=1300.31, samples=3 00:09:57.855 lat (usec) : 750=0.01%, 1000=0.11% 00:09:57.855 lat (msec) : 2=2.76%, 4=83.60%, 10=13.48%, 20=0.04% 00:09:57.855 cpu : usr=99.05%, sys=0.10%, ctx=12, majf=0, minf=608 00:09:57.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:57.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.855 issued rwts: total=42440,42143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.855 00:09:57.855 Run status group 0 (all jobs): 00:09:57.855 READ: bw=82.8MiB/s (86.9MB/s), 82.8MiB/s-82.8MiB/s (86.9MB/s-86.9MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:57.855 WRITE: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:57.855 ----------------------------------------------------- 00:09:57.855 Suppressions used: 00:09:57.855 count bytes template 00:09:57.855 1 32 /usr/src/fio/parse.c 00:09:57.855 1 8 libtcmalloc_minimal.so 
00:09:57.855 ----------------------------------------------------- 00:09:57.855 00:09:57.855 14:10:58 -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:57.855 14:10:58 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:57.855 14:10:58 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:57.855 14:10:58 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:57.855 14:10:59 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:09:57.855 14:10:59 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:58.116 14:10:59 -- nvme/nvme.sh@41 -- # bs=4096 00:09:58.116 14:10:59 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:09:58.116 14:10:59 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:09:58.116 14:10:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:09:58.116 14:10:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:58.116 14:10:59 -- common/autotest_common.sh@1328 -- # local sanitizers 00:09:58.116 14:10:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.116 14:10:59 -- common/autotest_common.sh@1330 -- # shift 00:09:58.116 14:10:59 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:09:58.116 14:10:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:09:58.116 14:10:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.116 14:10:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:09:58.116 14:10:59 -- common/autotest_common.sh@1334 -- # grep libasan 00:09:58.116 14:10:59 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:58.116 14:10:59 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:58.116 14:10:59 -- common/autotest_common.sh@1336 -- # break 00:09:58.116 14:10:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:58.116 14:10:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:09:58.116 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:58.116 fio-3.35 00:09:58.116 Starting 1 thread 00:10:01.432 00:10:01.432 test: (groupid=0, jobs=1): err= 0: pid=64885: Wed Dec 4 14:11:02 2024 00:10:01.432 read: IOPS=10.9k, BW=42.5MiB/s (44.6MB/s)(86.7MiB/2038msec) 00:10:01.432 slat (nsec): min=4163, max=63225, avg=5335.32, stdev=2812.89 00:10:01.432 clat (usec): min=960, max=41913, avg=4070.68, stdev=2746.90 00:10:01.432 lat (usec): min=964, max=41917, avg=4076.01, stdev=2747.33 00:10:01.432 clat percentiles (usec): 00:10:01.432 | 1.00th=[ 1729], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:10:01.432 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3097], 00:10:01.432 | 70.00th=[ 4015], 80.00th=[ 5604], 90.00th=[ 7701], 95.00th=[ 9372], 00:10:01.432 | 99.00th=[12256], 99.50th=[13042], 99.90th=[39584], 99.95th=[40633], 00:10:01.432 | 
99.99th=[41157] 00:10:01.432 bw ( KiB/s): min=18240, max=82672, per=100.00%, avg=44314.00, stdev=31046.96, samples=4 00:10:01.432 iops : min= 4560, max=20668, avg=11078.50, stdev=7761.74, samples=4 00:10:01.432 write: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(86.4MiB/2038msec); 0 zone resets 00:10:01.432 slat (nsec): min=4234, max=71730, avg=5411.55, stdev=2850.13 00:10:01.432 clat (usec): min=974, max=88043, avg=7669.44, stdev=12916.19 00:10:01.432 lat (usec): min=979, max=88055, avg=7674.85, stdev=12916.43 00:10:01.432 clat percentiles (usec): 00:10:01.432 | 1.00th=[ 1893], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:10:01.432 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 3130], 00:10:01.432 | 70.00th=[ 4293], 80.00th=[ 6456], 90.00th=[11338], 95.00th=[47973], 00:10:01.432 | 99.00th=[55313], 99.50th=[58459], 99.90th=[71828], 99.95th=[77071], 00:10:01.432 | 99.99th=[85459] 00:10:01.432 bw ( KiB/s): min=17392, max=82648, per=100.00%, avg=44072.00, stdev=31073.58, samples=4 00:10:01.432 iops : min= 4348, max=20662, avg=11018.00, stdev=7768.39, samples=4 00:10:01.432 lat (usec) : 1000=0.01% 00:10:01.432 lat (msec) : 2=1.53%, 4=67.36%, 10=23.58%, 20=3.10%, 50=2.41% 00:10:01.432 lat (msec) : 100=2.01% 00:10:01.432 cpu : usr=99.07%, sys=0.20%, ctx=9, majf=0, minf=608 00:10:01.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:01.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.432 issued rwts: total=22192,22128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.432 00:10:01.432 Run status group 0 (all jobs): 00:10:01.432 READ: bw=42.5MiB/s (44.6MB/s), 42.5MiB/s-42.5MiB/s (44.6MB/s-44.6MB/s), io=86.7MiB (90.9MB), run=2038-2038msec 00:10:01.432 WRITE: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=86.4MiB (90.6MB), run=2038-2038msec 00:10:01.432 ----------------------------------------------------- 00:10:01.432 Suppressions used: 00:10:01.432 count bytes template 00:10:01.432 1 32 /usr/src/fio/parse.c 00:10:01.432 1 8 libtcmalloc_minimal.so 00:10:01.432 ----------------------------------------------------- 00:10:01.432 00:10:01.432 14:11:02 -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:01.432 14:11:02 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:01.432 14:11:02 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:01.432 14:11:02 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:01.432 14:11:02 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:01.432 14:11:02 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:01.432 14:11:02 -- nvme/nvme.sh@41 -- # bs=4096 00:10:01.432 14:11:02 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:01.432 14:11:02 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:01.432 14:11:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:10:01.432 14:11:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:01.432 14:11:02 -- 
common/autotest_common.sh@1328 -- # local sanitizers 00:10:01.432 14:11:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.432 14:11:02 -- common/autotest_common.sh@1330 -- # shift 00:10:01.432 14:11:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:10:01.432 14:11:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:10:01.432 14:11:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:01.432 14:11:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:10:01.432 14:11:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:10:01.432 14:11:02 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:01.432 14:11:02 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:01.432 14:11:02 -- common/autotest_common.sh@1336 -- # break 00:10:01.432 14:11:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:01.432 14:11:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:10:01.694 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:01.694 fio-3.35 00:10:01.694 Starting 1 thread 00:10:11.696 00:10:11.696 test: (groupid=0, jobs=1): err= 0: pid=64946: Wed Dec 4 14:11:11 2024 00:10:11.696 read: IOPS=20.5k, BW=79.9MiB/s (83.8MB/s)(160MiB/2001msec) 00:10:11.696 slat (usec): min=4, max=125, avg= 5.33, stdev= 2.63 00:10:11.696 clat (usec): min=283, max=8445, avg=3109.15, stdev=970.12 00:10:11.696 lat (usec): min=287, max=8450, avg=3114.48, stdev=971.37 00:10:11.696 clat percentiles (usec): 00:10:11.696 | 1.00th=[ 1811], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:10:11.696 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:10:11.696 | 70.00th=[ 3097], 80.00th=[ 3589], 90.00th=[ 4555], 95.00th=[ 5276], 00:10:11.696 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[ 7963], 00:10:11.696 | 99.99th=[ 8291] 00:10:11.696 bw ( KiB/s): min=73660, max=83456, per=97.54%, avg=79790.67, stdev=5343.11, samples=3 00:10:11.696 iops : min=18415, max=20864, avg=19947.67, stdev=1335.78, samples=3 00:10:11.696 write: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(159MiB/2001msec); 0 zone resets 00:10:11.696 slat (nsec): min=4225, max=71246, avg=5404.28, stdev=2619.73 00:10:11.696 clat (usec): min=406, max=8816, avg=3131.43, stdev=967.31 00:10:11.696 lat (usec): min=411, max=8821, avg=3136.84, stdev=968.57 00:10:11.696 clat percentiles (usec): 00:10:11.696 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:10:11.696 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2933], 00:10:11.696 | 70.00th=[ 3130], 80.00th=[ 3621], 90.00th=[ 4555], 95.00th=[ 5276], 00:10:11.696 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 7963], 00:10:11.696 | 99.99th=[ 8291] 00:10:11.696 bw ( KiB/s): min=73668, max=83352, per=97.80%, avg=79806.67, stdev=5337.51, samples=3 00:10:11.696 iops : min=18417, max=20838, avg=19951.67, stdev=1334.38, samples=3 00:10:11.696 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01% 00:10:11.696 lat (msec) : 2=1.40%, 4=83.35%, 10=15.20% 00:10:11.696 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=607 00:10:11.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 
00:10:11.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:11.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:11.696 issued rwts: total=40923,40823,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:11.696 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:11.696
00:10:11.696 Run status group 0 (all jobs):
00:10:11.696 READ: bw=79.9MiB/s (83.8MB/s), 79.9MiB/s-79.9MiB/s (83.8MB/s-83.8MB/s), io=160MiB (168MB), run=2001-2001msec
00:10:11.696 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=159MiB (167MB), run=2001-2001msec
00:10:11.696 -----------------------------------------------------
00:10:11.696 Suppressions used:
00:10:11.696 count bytes template
00:10:11.696 1 32 /usr/src/fio/parse.c
00:10:11.696 1 8 libtcmalloc_minimal.so
00:10:11.696 -----------------------------------------------------
00:10:11.696
00:10:11.696 ************************************
00:10:11.696 END TEST nvme_fio
00:10:11.696 ************************************
00:10:11.696 14:11:11 -- nvme/nvme.sh@44 -- # ran_fio=true
00:10:11.696 14:11:11 -- nvme/nvme.sh@46 -- # true
00:10:11.696
00:10:11.696 real 0m25.953s
00:10:11.696 user 0m14.969s
00:10:11.696 sys 0m19.287s
00:10:11.696 14:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:11.696 14:11:11 -- common/autotest_common.sh@10 -- # set +x
00:10:11.696
00:10:11.696 real 1m41.333s
00:10:11.696 user 3m39.955s
00:10:11.696 sys 0m29.801s
00:10:11.696 ************************************
00:10:11.696 END TEST nvme
00:10:11.696 ************************************
00:10:11.696 14:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:11.696 14:11:11 -- common/autotest_common.sh@10 -- # set +x
00:10:11.696 14:11:11 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]]
00:10:11.696 14:11:11 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:11.696 14:11:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:11.696 14:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:11.696 14:11:11 -- common/autotest_common.sh@10 -- # set +x
00:10:11.696 ************************************
00:10:11.696 START TEST nvme_scc
00:10:11.696 ************************************
00:10:11.696 14:11:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:11.696 * Looking for test storage...
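Before the log moves on to nvme_scc, note what the four fio invocations above all have in common: each one resolves libasan out of the plugin's ldd output so the sanitizer runtime can be preloaded ahead of the SPDK ioengine, then runs fio against a bare PCIe traddr instead of a block device. A condensed, illustrative version of that wrapper, extended with machine-readable result parsing, is sketched below. The runs above used fio's human-readable output, so the --output-format=json flag and the jq paths are assumptions of this sketch, not what autotest itself does.

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# fio itself is not built with ASan, so when it dlopen()s the ASan-instrumented
# ioengine the sanitizer runtime must already be loaded; hence the preload,
# resolved the same way as in autotest_common.sh (ldd | grep | awk).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
out=$(LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 \
      --output-format=json)   # assumption: JSON instead of the text seen above
# Pull the headline numbers (read IOPS and p99 completion latency) out of the
# JSON; the key names follow fio's documented JSON schema.
jq '{read_iops: .jobs[0].read.iops,
     read_p99_ns: .jobs[0].read.clat_ns.percentiles."99.000000"}' <<<"$out"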
00:10:11.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:11.696 14:11:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:11.696 14:11:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:11.696 14:11:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:11.696 14:11:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:11.696 14:11:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:11.696 14:11:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:11.696 14:11:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:11.696 14:11:12 -- scripts/common.sh@335 -- # IFS=.-: 00:10:11.696 14:11:12 -- scripts/common.sh@335 -- # read -ra ver1 00:10:11.696 14:11:12 -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.696 14:11:12 -- scripts/common.sh@336 -- # read -ra ver2 00:10:11.696 14:11:12 -- scripts/common.sh@337 -- # local 'op=<' 00:10:11.696 14:11:12 -- scripts/common.sh@339 -- # ver1_l=2 00:10:11.696 14:11:12 -- scripts/common.sh@340 -- # ver2_l=1 00:10:11.696 14:11:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:11.696 14:11:12 -- scripts/common.sh@343 -- # case "$op" in 00:10:11.696 14:11:12 -- scripts/common.sh@344 -- # : 1 00:10:11.696 14:11:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:11.696 14:11:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.696 14:11:12 -- scripts/common.sh@364 -- # decimal 1 00:10:11.696 14:11:12 -- scripts/common.sh@352 -- # local d=1 00:10:11.696 14:11:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.696 14:11:12 -- scripts/common.sh@354 -- # echo 1 00:10:11.696 14:11:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:11.696 14:11:12 -- scripts/common.sh@365 -- # decimal 2 00:10:11.696 14:11:12 -- scripts/common.sh@352 -- # local d=2 00:10:11.696 14:11:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.696 14:11:12 -- scripts/common.sh@354 -- # echo 2 00:10:11.696 14:11:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:11.696 14:11:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:11.696 14:11:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:11.696 14:11:12 -- scripts/common.sh@367 -- # return 0 00:10:11.696 14:11:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.696 14:11:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.696 --rc genhtml_branch_coverage=1 00:10:11.696 --rc genhtml_function_coverage=1 00:10:11.696 --rc genhtml_legend=1 00:10:11.696 --rc geninfo_all_blocks=1 00:10:11.696 --rc geninfo_unexecuted_blocks=1 00:10:11.696 00:10:11.696 ' 00:10:11.696 14:11:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.696 --rc genhtml_branch_coverage=1 00:10:11.696 --rc genhtml_function_coverage=1 00:10:11.696 --rc genhtml_legend=1 00:10:11.696 --rc geninfo_all_blocks=1 00:10:11.696 --rc geninfo_unexecuted_blocks=1 00:10:11.696 00:10:11.696 ' 00:10:11.696 14:11:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.696 --rc genhtml_branch_coverage=1 00:10:11.696 --rc genhtml_function_coverage=1 00:10:11.696 --rc genhtml_legend=1 00:10:11.696 --rc geninfo_all_blocks=1 00:10:11.696 --rc geninfo_unexecuted_blocks=1 00:10:11.696 00:10:11.696 ' 00:10:11.696 14:11:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:11.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.696 --rc genhtml_branch_coverage=1 00:10:11.696 --rc genhtml_function_coverage=1 00:10:11.696 --rc genhtml_legend=1 00:10:11.696 --rc geninfo_all_blocks=1 00:10:11.696 --rc geninfo_unexecuted_blocks=1 00:10:11.696 00:10:11.696 ' 00:10:11.696 14:11:12 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:11.696 14:11:12 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:11.696 14:11:12 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:11.696 14:11:12 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:11.696 14:11:12 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.696 14:11:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.696 14:11:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.696 14:11:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.696 14:11:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.696 14:11:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.696 14:11:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.696 14:11:12 -- paths/export.sh@5 -- # export PATH 00:10:11.696 14:11:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.696 14:11:12 -- nvme/functions.sh@10 -- # ctrls=() 00:10:11.696 14:11:12 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:11.696 14:11:12 -- nvme/functions.sh@11 -- # nvmes=() 00:10:11.696 14:11:12 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:11.696 14:11:12 -- nvme/functions.sh@12 -- # bdfs=() 00:10:11.696 14:11:12 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:11.696 14:11:12 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:11.696 14:11:12 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:11.696 
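The four declares above are the whole state model of functions.sh: ctrls maps a controller name to itself once scanned, nvmes points each controller at a per-controller namespace array, bdfs records its PCI address, and ordered_ctrls keeps the controllers sorted by index. A minimal sketch of how the scan that follows fills these in from sysfs; the sysfs layout is standard Linux, but treat the loop as an approximation of scan_nvme_ctrls, not a verbatim copy.

declare -A ctrls nvmes bdfs    # controller -> self / namespace-array name / PCI bdf
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    name=${ctrl##*/}                                  # e.g. nvme0
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:09.0
    ctrls["$name"]=$name
    nvmes["$name"]="${name}_ns"         # name of the per-controller namespace array
    bdfs["$name"]=$pci
    ordered_ctrls[${name/nvme/}]=$name  # index 0 for nvme0, 1 for nvme1, ...
done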
14:11:12 -- nvme/functions.sh@14 -- # nvme_name= 00:10:11.696 14:11:12 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.696 14:11:12 -- nvme/nvme_scc.sh@12 -- # uname 00:10:11.696 14:11:12 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:11.696 14:11:12 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:11.696 14:11:12 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.696 Waiting for block devices as requested 00:10:11.696 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.696 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.696 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.696 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.996 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:16.996 14:11:17 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:16.996 14:11:17 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:16.996 14:11:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.996 14:11:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:16.996 14:11:17 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:16.996 14:11:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:16.996 14:11:17 -- scripts/common.sh@15 -- # local i 00:10:16.996 14:11:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:16.996 14:11:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:16.996 14:11:17 -- scripts/common.sh@24 -- # return 0 00:10:16.997 14:11:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:16.997 14:11:17 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:16.997 14:11:17 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@18 -- # shift 00:10:16.997 14:11:17 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 
14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.997 14:11:17 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.997 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:16.997 14:11:17 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:16.997 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:16.998 14:11:17 
-- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.998 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.998 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:16.998 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:16.999 
14:11:17 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.999 14:11:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:16.999 14:11:17 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # IFS=: 00:10:16.999 14:11:17 -- nvme/functions.sh@21 -- # read -r reg val 
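That closes out the id-ctrl parse for nvme0: every "field : value" line that nvme-cli printed has become a key in the nvme0 associative array, and the bookkeeping just below files the controller and its namespaces under PCI address 0000:00:09.0. Stripped of the xtrace noise, the loop reduces to a few lines; a standalone approximation, assuming nvme-cli's id-ctrl text format:

declare -A id
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}    # field names are padded out with spaces
    val=${val# }                # values carry a single leading space
    # skip the "NVME Identify Controller:" banner and empty values
    [[ -n $reg && -n $val ]] && id[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "model: ${id[mn]}, mdts: ${id[mdts]}, subnqn: ${id[subnqn]}"

Because read assigns everything after the first colon to val, values that themselves contain colons (subnqn, the power-state strings) survive intact, which is why the trace above could capture nqn.2019-08.org.qemu:fdp-subsys3 in one pass.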
00:10:16.999 14:11:17 -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme0_ns
00:10:16.999 14:11:17 -- nvme/functions.sh@60 -- ctrls["$ctrl_dev"]=nvme0
00:10:16.999 14:11:17 -- nvme/functions.sh@61 -- nvmes["$ctrl_dev"]=nvme0_ns
00:10:16.999 14:11:17 -- nvme/functions.sh@62 -- bdfs["$ctrl_dev"]=0000:00:09.0
00:10:16.999 14:11:17 -- nvme/functions.sh@63 -- ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:16.999 14:11:17 -- nvme/functions.sh@47 -- for ctrl in /sys/class/nvme/nvme*
00:10:16.999 14:11:17 -- nvme/functions.sh@48 -- [[ -e /sys/class/nvme/nvme1 ]]
00:10:16.999 14:11:17 -- nvme/functions.sh@49 -- pci=0000:00:08.0
00:10:16.999 14:11:17 -- nvme/functions.sh@50 -- pci_can_use 0000:00:08.0 (scripts/common.sh: no block list match, returns 0)
00:10:16.999 14:11:17 -- nvme/functions.sh@51 -- ctrl_dev=nvme1
00:10:16.999 14:11:17 -- nvme/functions.sh@52 -- nvme_get nvme1 id-ctrl /dev/nvme1
00:10:16.999 14:11:17 -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:16.999 14:11:17 -- nvme/functions.sh -- nvme_get nvme1: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6
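Here the walker finishes nvme0's bookkeeping (ctrls/nvmes/bdfs/ordered_ctrls) and moves on to /sys/class/nvme/nvme1 at PCI 0000:00:08.0 once pci_can_use approves it. The controller-to-BDF mapping can be reproduced from the standard sysfs layout; a hedged sketch using the device symlink rather than the script's own helpers:

#!/usr/bin/env bash
# Walk /sys/class/nvme and recover each controller's PCI BDF via the
# sysfs device symlink; illustrative only, functions.sh uses its own helpers.
declare -A bdfs=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    name=${ctrl##*/}
    bdf=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:08.0
    bdfs[$name]=$bdf
    echo "$name -> $bdf"
done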
00:10:17.000 14:11:17 -- nvme/functions.sh -- nvme_get nvme1: ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0
00:10:17.000 14:11:17 -- nvme/functions.sh -- nvme_get nvme1: cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:10:17.000 14:11:17 -- nvme/functions.sh -- nvme_get nvme1: frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
00:10:17.000 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
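Two of the values just captured are temperature thresholds: id-ctrl reports WCTEMP and CCTEMP in kelvins per the NVMe spec, so wctemp=343 and cctemp=373 are the 70 C warning and 100 C critical composite-temperature limits. A quick conversion check:

#!/usr/bin/env bash
# Kelvin-to-Celsius check for the thresholds captured above.
wctemp=343 cctemp=373
echo "warning:  $((wctemp - 273)) C"   # 70 C
echo "critical: $((cctemp - 273)) C"   # 100 C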
00:10:17.001 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0
00:10:17.001 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
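oncs=0x15d just above is the Optional NVM Command Support bitmask; with bits 0, 2, 3, 4, 6 and 8 set it advertises Compare, Dataset Management, Write Zeroes, Save/Select in Set Features, Timestamp and Copy, but not Write Uncorrectable or Reservations. A small decoder sketch, with bit names taken from the NVMe base spec rather than from functions.sh:

#!/usr/bin/env bash
# Decode the ONCS capability bits of the controller identified above.
oncs=0x15d
names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
       "Save/Select in Set Features" "Reservations" "Timestamp" "Verify" "Copy")
for i in "${!names[@]}"; do
    (( oncs & (1 << i) )) && echo "supported: ${names[i]}"
done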
00:10:17.002 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342
00:10:17.002 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:17.002 14:11:18 -- nvme/functions.sh -- nvme_get nvme1: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:17.002 14:11:18 -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme1_ns
00:10:17.002 14:11:18 -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:10:17.002 14:11:18 -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:17.002 14:11:18 -- nvme/functions.sh@56 -- ns_dev=nvme1n1
00:10:17.002 14:11:18 -- nvme/functions.sh@57 -- nvme_get nvme1n1 id-ns /dev/nvme1n1
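With the controller record complete, the @54-@58 loop above switches to per-namespace identify data: it globs $ctrl/${ctrl##*/}n* and keys each namespace by its trailing index. The same walk restated as a standalone sketch:

#!/usr/bin/env bash
# The namespace walk from the trace: index nvme1's namespaces by their
# trailing number, as _ctrl_ns[${ns##*n}]=... does.
ctrl=/sys/class/nvme/nvme1
declare -A ctrl_ns=()
for ns in "$ctrl/${ctrl##*/}n"*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                  # e.g. nvme1n2
    ctrl_ns[${ns_dev##*n}]=$ns_dev    # key "2" -> nvme1n2
done
declare -p ctrl_ns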
00:10:17.002 14:11:18 -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:17.002 14:11:18 -- nvme/functions.sh -- nvme_get nvme1n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:17.003 14:11:18 -- nvme/functions.sh -- nvme_get nvme1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:17.003 14:11:18 -- nvme/functions.sh -- nvme_get nvme1n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:17.003 14:11:18 -- nvme/functions.sh -- nvme_get nvme1n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:10:17.004 14:11:18 -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=nvme1n1
00:10:17.004 14:11:18 -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:10:17.004 14:11:18 -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:10:17.004 14:11:18 -- nvme/functions.sh@56 -- ns_dev=nvme1n2
00:10:17.004 14:11:18 -- nvme/functions.sh@57 -- nvme_get nvme1n2 id-ns /dev/nvme1n2
00:10:17.004 14:11:18 -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2
00:10:17.004 14:11:18 -- nvme/functions.sh -- nvme_get nvme1n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
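The geometry captured for nvme1n1 decodes cleanly: flbas=0x4 selects lbaf4, i.e. 2^12 = 4096-byte data blocks with no metadata (ms:0), and nsze=0x100000 such blocks works out to 4 GiB. A quick check:

#!/usr/bin/env bash
# Capacity check from the id-ns fields above: nsze blocks of 2^lbads bytes.
nsze=0x100000 lbads=12
echo "$(( nsze * (1 << lbads) )) bytes"   # 4294967296 = 4 GiB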
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.004 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.004 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:17.004 14:11:18 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:17.004 14:11:18 -- 
00:10:17.004 14:11:18 -- nvme/functions.sh@21-23 -- # nvme_get nvme1n2 (id-ns, continued): nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:17.005 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:17.005 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n2: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:10:17.005 14:11:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2
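
The IFS=:/read/eval churn traced above is a single helper, nvme_get, which pipes nvme-cli output through a 'field : value' parser and lands each pair in a global bash associative array (nvme1n2 here). A minimal sketch of that pattern, reconstructed from the trace — the real nvme/functions.sh may differ in details such as whitespace handling:

    NVME=/usr/local/src/nvme-cli/nvme        # binary path as logged at functions.sh@16

    nvme_get() {                             # usage: nvme_get nvme1n2 id-ns /dev/nvme1n2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # declare the target array globally, e.g. nvme1n2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # 'mssrl   ' -> 'mssrl'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"   # e.g. nvme1n2[mssrl]='128'
        done < <("$NVME" "$@")
    }

After the call returns, fields are plain lookups: ${nvme1n2[mssrl]} expands to 128.
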
00:10:17.005 14:11:18 -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme1/nvme1n3 exists; ns_dev=nvme1n3; nvme_get nvme1n3 id-ns /dev/nvme1n3
00:10:17.005 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n3 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:17.005 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:17.006 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:17.006 14:11:18 -- nvme/functions.sh@21-23 -- # nvme1n3: lbaf0-7 identical to nvme1n2 (lbaf4 'ms:0 lbads:12 rp:0 (in use)')
00:10:17.007 14:11:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3
00:10:17.007 14:11:18 -- nvme/functions.sh@60-63 -- # ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:08.0 ordered_ctrls[1]=nvme1
00:10:17.007 14:11:18 -- nvme/functions.sh@47-50 -- # next controller: /sys/class/nvme/nvme2 exists; pci=0000:00:06.0; pci_can_use 0000:00:06.0
00:10:17.007 14:11:18 -- scripts/common.sh@15-24 -- # no PCI allow/block lists set ([[ -z '' ]]); return 0
00:10:17.007 14:11:18 -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
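
Zooming out, functions.sh@47-63 is the enclosing scan: each /sys/class/nvme/nvme* controller is filtered by its PCI address, parsed with nvme_get id-ctrl, then every namespace under it is parsed with nvme_get id-ns and the results are registered in ctrls/nvmes/bdfs/ordered_ctrls. A sketch of that loop under the same assumptions as the nvme_get sketch above; pci_can_use is stubbed to the empty-list case this run hits at scripts/common.sh@15-24, and the sysfs readlink for the PCI address is an assumption:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() { [[ -z ${PCI_BLOCKED:-} ]]; }          # stub: real version checks allow/block lists

    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:06.0 (assumed sysfs layout)
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -n _ctrl_ns=${ctrl_dev}_ns                # per-controller namespace table
        for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme2/nvme2n1, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace number
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        unset -n _ctrl_ns                                 # drop the nameref before the next controller
    done
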
00:10:17.007 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:10:17.007 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:10:17.008 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
00:10:17.008 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:17.009 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:17.009 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:17.009 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:10:17.009 14:11:18 -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme2_ns; /sys/class/nvme/nvme2/nvme2n1 exists; ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:17.009 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2n1 (id-ns): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14
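
With the arrays populated, later test stages can gate on controller capabilities with plain parameter expansion — e.g. nvme2 above advertises oncs=0x15d and vwc=0x7. A hypothetical check in that style (the 'supports' helper is illustrative, not from this log; bit positions are the ONCS field of the NVMe base spec):

    supports() {                    # usage: supports nvme2 oncs 0x04
        local -n _c=$1              # bash 4.3+ nameref to the populated array
        (( (${_c[$2]} & $3) != 0 ))
    }

    supports nvme2 oncs 0x04 && echo "nvme2 supports Dataset Management"   # 0x15d has bit 2 set
    supports nvme2 oncs 0x08 && echo "nvme2 supports Write Zeroes"         # 0x15d has bit 3 set
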
00:10:17.010 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2n1: nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:10:17.010 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2n1: noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:17.010 14:11:18 -- nvme/functions.sh@21-23 -- # nvme2n1: nguid=00000000000000000000000000000000
00:10:17.010 14:11:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:17.010 14:11:18 -- nvme/functions.sh@23
-- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:17.010 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.010 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:17.010 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:17.010 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.010 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:17.010 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:17.010 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.010 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.010 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # IFS=: 00:10:17.011 14:11:18 -- nvme/functions.sh@21 -- # read -r reg val 00:10:17.011 14:11:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:17.011 14:11:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:17.011 14:11:18 -- 
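
A minimal standalone sketch of the nvme_get pattern traced above: nvme-cli's id-ctrl/id-ns output is a series of "name : value" lines, which the harness splits on ':' into a bash associative array. The helper name and the trimming below are this sketch's simplifications, not the exact nvme/functions.sh code.

#!/usr/bin/env bash
# Sketch only: parse `nvme id-ctrl` text output into an associative array,
# mirroring the IFS=: / read -r reg val loop seen in the trace.
declare -A regs

nvme_get_sketch() {
  local dev=$1 reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # drop the column padding around the name
    [[ -n $reg ]] || continue    # skip blank lines between sections
    regs[$reg]=${val# }          # keep the value, minus one leading space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}

nvme_get_sketch /dev/nvme2
echo "oncs=${regs[oncs]:-?} mdts=${regs[mdts]:-?} nn=${regs[nn]:-?}"
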
00:10:17.011 14:11:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:17.011 14:11:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:17.011 14:11:18 -- nvme/functions.sh@49 -- # pci=0000:00:07.0
00:10:17.011 14:11:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0
00:10:17.011 14:11:18 -- scripts/common.sh@15 -- # local i
00:10:17.011 14:11:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]]
00:10:17.011 14:11:18 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:17.011 14:11:18 -- scripts/common.sh@24 -- # return 0
00:10:17.011 14:11:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:17.011 14:11:18 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:17.011 14:11:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme3: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400
00:10:17.011 14:11:18 -- nvme/functions.sh@23 -- # nvme3: cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:10:17.012 14:11:18 -- nvme/functions.sh@23 -- # nvme3: fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:10:17.012 14:11:18 -- nvme/functions.sh@23 -- # nvme3: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:10:17.012 14:11:18 -- nvme/functions.sh@23 -- # nvme3: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:10:17.012 14:11:18 -- nvme/functions.sh@23 -- # nvme3: hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
00:10:17.013 14:11:18 -- nvme/functions.sh@23 -- # nvme3: anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:17.013 14:11:18 -- nvme/functions.sh@23 -- # nvme3: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:10:17.013 14:11:18 -- nvme/functions.sh@23 -- # nvme3: icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:17.013 14:11:18 -- nvme/functions.sh@23 -- # nvme3: subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:17.013 14:11:18 -- nvme/functions.sh@23 -- # nvme3: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:17.014 14:11:18 -- nvme/functions.sh@23 -- # nvme3: rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:17.014 14:11:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:17.014 14:11:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:17.014 14:11:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]]
00:10:17.014 14:11:18 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1
00:10:17.014 14:11:18 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1
00:10:17.014 14:11:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1
00:10:17.014 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:10:17.014 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:10:17.015 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:17.015 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:17.015 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:17.015 14:11:18 -- nvme/functions.sh@23 -- # nvme3n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:17.015 14:11:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1
00:10:17.015 14:11:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:17.015 14:11:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:17.015 14:11:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0
00:10:17.015 14:11:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:17.015 14:11:18 -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:17.015 14:11:18 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:10:17.015 14:11:18 -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:10:17.015 14:11:18 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:17.015 14:11:18 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:10:17.015 14:11:18 -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:10:17.015 14:11:18 -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:10:17.015 14:11:18 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:10:17.015 14:11:18 -- nvme/functions.sh@194 -- # [[ function == function ]]
00:10:17.015 14:11:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:17.015 14:11:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1: oncs=0x15d, (( oncs & 1 << 8 )) -> echo nvme1
00:10:17.015 14:11:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0: oncs=0x15d, (( oncs & 1 << 8 )) -> echo nvme0
00:10:17.015 14:11:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3: oncs=0x15d, (( oncs & 1 << 8 )) -> echo nvme3
00:10:17.016 14:11:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2: oncs=0x15d, (( oncs & 1 << 8 )) -> echo nvme2
00:10:17.016 14:11:18 -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:10:17.016 14:11:18 -- nvme/functions.sh@206 -- # echo nvme1
00:10:17.016 14:11:18 -- nvme/functions.sh@207 -- # return 0
00:10:17.016 14:11:18 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:17.016 14:11:18 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0
00:10:17.016 14:11:18 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:17.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:17.960 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:10:17.960 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:10:17.960 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:10:17.960 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
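
The ctrl_has_scc checks above hinge on a single bit: in NVMe, ONCS (Optional NVM Command Support) bit 8 advertises the Simple Copy command, which is why every controller reporting oncs=0x15d passes. Restated outside the harness as a minimal bash check:

oncs=0x15d                       # value reported by the QEMU controllers above
if (( oncs & (1 << 8) )); then   # ONCS bit 8 = Simple Copy (SCC) support
  echo "controller supports Simple Copy"
fi
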
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:17.960 14:11:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:17.960 14:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.960 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:10:17.960 ************************************ 00:10:17.960 START TEST nvme_simple_copy 00:10:17.960 ************************************ 00:10:17.960 14:11:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:10:18.222 Initializing NVMe Controllers 00:10:18.222 Attaching to 0000:00:08.0 00:10:18.222 Controller supports SCC. Attached to 0000:00:08.0 00:10:18.222 Namespace ID: 1 size: 4GB 00:10:18.222 Initialization complete. 00:10:18.222 00:10:18.222 Controller QEMU NVMe Ctrl (12342 ) 00:10:18.222 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:18.222 Namespace Block Size:4096 00:10:18.222 Writing LBAs 0 to 63 with Random Data 00:10:18.222 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:18.222 LBAs matching Written Data: 64 00:10:18.222 00:10:18.222 real 0m0.256s 00:10:18.222 user 0m0.092s 00:10:18.222 sys 0m0.062s 00:10:18.222 14:11:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.222 ************************************ 00:10:18.222 END TEST nvme_simple_copy 00:10:18.222 ************************************ 00:10:18.222 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.222 ************************************ 00:10:18.222 END TEST nvme_scc 00:10:18.222 ************************************ 00:10:18.222 00:10:18.222 real 0m7.712s 00:10:18.222 user 0m1.121s 00:10:18.222 sys 0m1.350s 00:10:18.222 14:11:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.222 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.484 14:11:19 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:10:18.484 14:11:19 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:18.484 14:11:19 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:10:18.484 14:11:19 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:10:18.484 14:11:19 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:18.484 14:11:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:18.484 14:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.484 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:10:18.484 ************************************ 00:10:18.484 START TEST nvme_fdp 00:10:18.484 ************************************ 00:10:18.484 14:11:19 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:10:18.484 * Looking for test storage... 
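The ctrl_has_scc walk above is how the harness picks a copy-capable controller: it reads each controller's ONCS (Optional NVM Command Support) field and tests bit 8, which the NVMe base spec assigns to the Copy (simple copy) command. Every controller here reports oncs=0x15d, binary 1 0101 1101, so bits 0 (Compare), 2 (Dataset Management), 3 (Write Zeroes), 4 (Save/Select in Set/Get Features), 6 (Timestamp) and 8 (Copy) are set. A minimal stand-alone sketch of the same probe, assuming nvme-cli is installed and the device node exists:

    dev=/dev/nvme1
    # Human-readable id-ctrl prints lines like "oncs : 0x15d"; pull the value out.
    oncs=$(nvme id-ctrl "$dev" | awk '$1 == "oncs" {print $3}')
    if (( oncs & (1 << 8) )); then
        echo "$dev supports the Copy (simple copy) command"
    fi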
00:10:18.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:18.484 14:11:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.484 14:11:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.484 14:11:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.484 14:11:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.484 14:11:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.484 14:11:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.484 14:11:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.484 14:11:19 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.484 14:11:19 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.484 14:11:19 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.484 14:11:19 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.484 14:11:19 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.484 14:11:19 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.484 14:11:19 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.484 14:11:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.484 14:11:19 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.484 14:11:19 -- scripts/common.sh@344 -- # : 1 00:10:18.484 14:11:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.484 14:11:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.484 14:11:19 -- scripts/common.sh@364 -- # decimal 1 00:10:18.485 14:11:19 -- scripts/common.sh@352 -- # local d=1 00:10:18.485 14:11:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.485 14:11:19 -- scripts/common.sh@354 -- # echo 1 00:10:18.485 14:11:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.485 14:11:19 -- scripts/common.sh@365 -- # decimal 2 00:10:18.485 14:11:19 -- scripts/common.sh@352 -- # local d=2 00:10:18.485 14:11:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.485 14:11:19 -- scripts/common.sh@354 -- # echo 2 00:10:18.485 14:11:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.485 14:11:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.485 14:11:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.485 14:11:19 -- scripts/common.sh@367 -- # return 0 00:10:18.485 14:11:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.485 14:11:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.485 --rc genhtml_branch_coverage=1 00:10:18.485 --rc genhtml_function_coverage=1 00:10:18.485 --rc genhtml_legend=1 00:10:18.485 --rc geninfo_all_blocks=1 00:10:18.485 --rc geninfo_unexecuted_blocks=1 00:10:18.485 00:10:18.485 ' 00:10:18.485 14:11:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.485 --rc genhtml_branch_coverage=1 00:10:18.485 --rc genhtml_function_coverage=1 00:10:18.485 --rc genhtml_legend=1 00:10:18.485 --rc geninfo_all_blocks=1 00:10:18.485 --rc geninfo_unexecuted_blocks=1 00:10:18.485 00:10:18.485 ' 00:10:18.485 14:11:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.485 --rc genhtml_branch_coverage=1 00:10:18.485 --rc genhtml_function_coverage=1 00:10:18.485 --rc genhtml_legend=1 00:10:18.485 --rc geninfo_all_blocks=1 00:10:18.485 --rc geninfo_unexecuted_blocks=1 00:10:18.485 00:10:18.485 ' 00:10:18.485 14:11:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.485 --rc genhtml_branch_coverage=1 00:10:18.485 --rc genhtml_function_coverage=1 00:10:18.485 --rc genhtml_legend=1 00:10:18.485 --rc geninfo_all_blocks=1 00:10:18.485 --rc geninfo_unexecuted_blocks=1 00:10:18.485 00:10:18.485 ' 00:10:18.485 14:11:19 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.485 14:11:19 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.485 14:11:19 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:18.485 14:11:19 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:18.485 14:11:19 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.485 14:11:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.485 14:11:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.485 14:11:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.485 14:11:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.485 14:11:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.485 14:11:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.485 14:11:19 -- paths/export.sh@5 -- # export PATH 00:10:18.485 14:11:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.485 14:11:19 -- nvme/functions.sh@10 -- # ctrls=() 00:10:18.485 14:11:19 -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:18.485 14:11:19 -- nvme/functions.sh@11 -- # nvmes=() 00:10:18.485 14:11:19 -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:18.485 14:11:19 -- nvme/functions.sh@12 -- # bdfs=() 00:10:18.485 14:11:19 -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:18.485 14:11:19 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:18.485 14:11:19 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:18.485 
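The cmp_versions trace a few lines up (scripts/common.sh) is autotest gating on the installed lcov: `lt 1.15 2` splits both version strings on ".", "-" and ":" and compares them component by component, so 1.15 sorts before 2 because 1 < 2 in the first field. A compact sketch of the same idea; ver_lt is a hypothetical name, and the real script additionally normalizes non-numeric components through its decimal() helper, omitted here:

    ver_lt() {
        # Split on . - : and compare numerically; missing components count as 0.
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"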
14:11:19 -- nvme/functions.sh@14 -- # nvme_name= 00:10:18.485 14:11:19 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.485 14:11:19 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:19.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:19.079 Waiting for block devices as requested 00:10:19.079 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.079 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.342 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.342 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.643 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:24.643 14:11:25 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:24.643 14:11:25 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:24.643 14:11:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.643 14:11:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:10:24.643 14:11:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:10:24.643 14:11:25 -- scripts/common.sh@15 -- # local i 00:10:24.643 14:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:10:24.643 14:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.643 14:11:25 -- scripts/common.sh@24 -- # return 0 00:10:24.643 14:11:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:24.643 14:11:25 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:24.643 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.643 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:24.643 14:11:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.643 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:24.643 14:11:25 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.643 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 
14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:24.644 
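The controllers report lpa=0x7 (Log Page Attributes). Read against the NVMe base spec, with the bit meanings worth re-checking for the exact revision in use: bit 0 advertises a per-namespace SMART/Health log, bit 1 the Commands Supported and Effects log, and bit 2 extended offsets/lengths in Get Log Page. A one-line check per bit:

    lpa=0x7    # value from the trace above
    (( lpa & 1 << 0 )) && echo "per-namespace SMART / Health log"
    (( lpa & 1 << 1 )) && echo "Commands Supported and Effects log"
    (( lpa & 1 << 2 )) && echo "extended data in Get Log Page"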
14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:24.644 14:11:25 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.644 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:10:24.644 14:11:25 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.644 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 
14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 
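All of the field-by-field output since scan_nvme_ctrls is one pattern repeated: nvme_get pipes `nvme id-ctrl /dev/nvmeX` through `IFS=: read -r reg val` and evals each pair into a per-controller associative array, which is why every register appears in the trace as a test, an eval and an assignment. A stand-alone sketch of that parse, assuming nvme-cli and the device node are present (the real function also handles namespace output and quoting details not shown):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # register name, padding stripped
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }             # value, leading space dropped
    done < <(nvme id-ctrl /dev/nvme0)
    echo "mdts=${ctrl[mdts]} nn=${ctrl[nn]} oncs=${ctrl[oncs]}"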
00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.645 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.645 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:24.645 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:24.646 14:11:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
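With the id-ctrl fields captured, each controller is registered in the four maps functions.sh declared earlier; the assignments for nvme0, taken directly from the surrounding trace, amount to:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrls[nvme0]=nvme0               # controller -> name of its register array
    nvmes[nvme0]=nvme0_ns            # controller -> name of its namespace map
    bdfs[nvme0]=0000:00:09.0         # controller -> PCI address
    ordered_ctrls[0]=nvme0           # index is ${ctrl_dev/nvme/}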
00:10:24.646 14:11:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:24.646 14:11:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:10:24.646 14:11:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:24.646 14:11:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.646 14:11:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:10:24.646 14:11:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:10:24.646 14:11:25 -- scripts/common.sh@15 -- # local i 00:10:24.646 14:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:10:24.646 14:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.646 14:11:25 -- scripts/common.sh@24 -- # return 0 00:10:24.646 14:11:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:24.646 14:11:25 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:24.646 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.646 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # 
IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:24.646 
14:11:25 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.646 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.646 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.646 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 
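Note the ctratt values differ between the two controllers parsed so far: nvme0 (serial 12343, subnqn nqn.2019-08.org.qemu:fdp-subsys3) reports 0x88010, while nvme1 (serial 12342) reports 0x8000. In recent NVMe revisions CTRATT bit 19 (0x80000) advertises Flexible Data Placement, which would make nvme0 the natural target of the nvme_fdp test being set up here; treat this as a hedged reading of the spec rather than something the log itself states:

    # Bit 19 = Flexible Data Placement in NVMe 2.0-era specs (verify per revision).
    for ctratt in 0x88010 0x8000; do
        printf '%s -> FDP bit: %d\n' "$ctratt" $(( (ctratt >> 19) & 1 ))
    done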
00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 
00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
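The wctemp/cctemp pair (343 and 373 on every controller here) is reported in kelvins, as NVMe temperature fields are: 343 K is the 70 C warning threshold and 373 K the 100 C critical threshold, using the conventional integer offset of 273. As a quick check:

    # NVMe thresholds come back in kelvins; convert for readability.
    for k in 343 373; do
        echo "$k K = $(( k - 273 )) C"
    done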
00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.647 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.647 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:24.647 14:11:25 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:24.648 14:11:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:24.648 14:11:25 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.648 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.648 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:24.649 14:11:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.649 14:11:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:24.649 14:11:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:24.649 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.649 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 
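Note: the @53-@58 lines a little further up show how namespaces are walked once the controller itself has been parsed; with ctrl=/sys/class/nvme/nvme1 the glob expands to nvme1n1 through nvme1n3, and each gets its own id-ns dump like the nvme1n1 block in progress here. A sketch reconstructed from the trace:

    local -n _ctrl_ns=${ctrl_dev}_ns            # nameref to e.g. nvme1_ns (@53)
    for ns in "$ctrl/${ctrl##*/}n"*; do         # /sys/class/nvme/nvme1/nvme1n1 ... (@54)
        [[ -e $ns ]] || continue                # (@55)
        ns_dev=${ns##*/}                        # e.g. nvme1n1 (@56)
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # per-namespace identify (@57)
        _ctrl_ns[${ns##*n}]=$ns_dev             # index by namespace number (@58)
    done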
00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.649 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:24.649 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.649 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:24.650 14:11:25 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:24.650 14:11:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.650 14:11:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:10:24.650 14:11:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:10:24.650 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.650 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 
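Note: in the lbafN entries above, lbads is the base-2 log of the LBA data size, so lbads:9 means 512-byte and lbads:12 means 4096-byte sectors, and ms is the per-block metadata size in bytes. flbas=0x4 selects lbaf4, matching the "(in use)" marker on the 4 KiB, no-metadata format. A small decoding sketch using the values from the trace:

    flbas=$(( 0x4 & 0xf ))                      # low nibble indexes the LBAF table -> 4
    lbads=12                                    # from "lbaf4: ms:0 lbads:12 rp:0 (in use)"
    echo "in-use block size: $(( 1 << lbads )) bytes"   # -> 4096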
00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.650 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.650 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.650 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:10:24.651 14:11:25 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 
14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:10:24.651 14:11:25 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.651 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.651 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.651 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:10:24.652 14:11:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.652 14:11:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
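Note: all three namespaces of this controller report the same geometry, nsze = ncap = nuse = 0x100000 blocks (nvme1n3, whose dump begins next, repeats it). At the in-use 4096-byte format that is 2^20 * 2^12 = 2^32 bytes, i.e. 4 GiB per namespace. Worked out in shell:

    nsze=$(( 0x100000 ))                        # blocks, from the id-ns dumps above
    bs=$(( 1 << 12 ))                           # lbaf4, lbads:12 -> 4096 bytes
    echo "$(( nsze * bs / 1024**3 )) GiB"       # -> 4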
00:10:24.652 14:11:25 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:10:24.652 14:11:25 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:10:24.652 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.652 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.652 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.652 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.652 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.653 14:11:25 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:10:24.653 14:11:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:24.653 14:11:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:24.653 14:11:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:10:24.653 14:11:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:24.653 14:11:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.653 14:11:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:10:24.653 14:11:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:10:24.653 14:11:25 -- scripts/common.sh@15 -- # local i 00:10:24.653 14:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:10:24.653 14:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.653 14:11:25 -- scripts/common.sh@24 -- # return 0 00:10:24.653 14:11:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:24.653 14:11:25 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:24.653 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.653 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.653 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:24.653 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.653 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 
14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:24.654 14:11:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.654 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:24.654 14:11:25 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.654 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.654 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 
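
The hundreds of functions.sh@21-23 entries above and below all come from one small loop: nvme_get pipes nvme-cli's id-ctrl (or id-ns) output through a colon-delimited read and evals each "field : value" pair into a global associative array. A minimal sketch of that pattern, assuming nvme-cli is installed and /dev/nvme2 exists; this is the illustrative shape of the loop, not SPDK's verbatim source:

#!/usr/bin/env bash
# Sketch of the id-ctrl parse loop the trace shows (illustrative, not the
# literal nvme/functions.sh code). Assumes nvme-cli and a /dev/nvme2 node.
declare -A nvme2=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue     # functions.sh@22: skip banner/blank lines
    reg=${reg// /}                # squeeze the padded field name, e.g. "vid"
    val=${val# }                  # drop the single space after the colon
    # functions.sh@23: id-ctrl values are plain tokens, so the eval is safe,
    # e.g. nvme2[vid]="0x1b36", nvme2[mdts]="7", nvme2[sn]="12340 ".
    eval "nvme2[$reg]=\"$val\""
done < <(nvme id-ctrl /dev/nvme2)
echo "vid=${nvme2[vid]} mdts=${nvme2[mdts]} nn=${nvme2[nn]}"

Multi-colon values survive because read assigns everything after the first colon to val, which is why composite fields like lbaf0="ms:0 lbads:9 rp:0 " and ps0="mp:25.00W operational ..." land in the array intact.
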
00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 
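
The functions.sh@47-63 entries bracketing each of these dumps (nvme2 is registered and nvme3 discovered at 0000:00:07.0 a few entries below) are the outer enumeration loop. A rough sketch under the same caveat; the pci_can_use blocklist check from scripts/common.sh is reduced to a stub, and the @N tags in the comments just mirror the functions.sh@N markers in the trace:

#!/usr/bin/env bash
# Sketch of the controller enumeration/registration loop (illustrative).
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
pci_can_use() { [[ -z ${PCI_BLOCKED:-} ]]; }          # stand-in for scripts/common.sh@15-24
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # @48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: e.g. 0000:00:06.0
    pci_can_use "$pci" || continue                    # @50
    ctrl_dev=${ctrl##*/}                              # @51: e.g. nvme2
    # @52-58: nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev" runs here, then
    # one nvme_get id-ns per namespace node under $ctrl (nvme2n1, ...).
    ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61: name of the per-ctrl ns map
    bdfs["$ctrl_dev"]=$pci                            # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63: index by ctrl number
done
for c in "${!bdfs[@]}"; do echo "$c -> ${bdfs[$c]}"; done
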
00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.655 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:24.655 14:11:25 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.655 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.656 14:11:25 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:24.656 14:11:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.656 14:11:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:24.656 14:11:25 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:24.656 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.656 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 
14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:24.656 14:11:25 -- 
nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.656 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.656 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:24.656 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:24.657 
14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:24.657 14:11:25 
-- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:24.657 14:11:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:24.657 14:11:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:24.657 14:11:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:10:24.657 14:11:25 
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:24.657 14:11:25 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:24.657 14:11:25 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:10:24.657 14:11:25 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:10:24.657 14:11:25 -- scripts/common.sh@15 -- # local i 00:10:24.657 14:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:10:24.657 14:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:24.657 14:11:25 -- scripts/common.sh@24 -- # return 0 00:10:24.657 14:11:25 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:24.657 14:11:25 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:24.657 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.657 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.657 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.657 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:24.657 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 
14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.658 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.658 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.658 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
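The trace above is nvme/functions.sh's nvme_get loop walking nvme-cli identify output one field at a time: with IFS set to ':' it reads each "reg : val" pair, skips empty values (the "[[ -n ... ]]" tests at @22), and stores the rest in an associative array keyed by the field name (the eval at @23). A minimal sketch of that loop, with the eval-plus-nameref plumbing of the real script simplified away (the nvme-cli path and device name are taken from this trace; the id-ctrl invocation is assumed):

# Minimal sketch of the parsing loop traced above (simplified: the real
# nvme_get uses eval and a nameref so the array name can vary per ctrl).
declare -A ctrl
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}           # field name with padding stripped
  val=${val# }                       # value; any later ':' chars survive
  [[ -n $val ]] && ctrl[$reg]=$val   # skip empty fields, as @22 does
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "mdts=${ctrl[mdts]} ctratt=${ctrl[ctratt]}"   # here: 7 and 0x8000
# nvme_fdp.sh later tests CTRATT bit 19 (FDP) exactly like @178 below:
# nvme3's 0x8000 fails this check, nvme0's 0x88010 passes it.
(( ctrl[ctratt] & 1 << 19 )) && echo "controller supports FDP"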
00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.659 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:24.659 14:11:25 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:24.659 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:24.660 
14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:24.660 14:11:25 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:24.660 14:11:25 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:10:24.660 14:11:25 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:10:24.660 14:11:25 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@18 -- # shift 00:10:24.660 14:11:25 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:10:24.660 
14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.660 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.660 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:10:24.660 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 
14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:24.661 14:11:25 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # IFS=: 00:10:24.661 14:11:25 -- nvme/functions.sh@21 -- # read -r reg val 00:10:24.661 14:11:25 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:10:24.661 14:11:25 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:24.661 14:11:25 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:24.661 14:11:25 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:10:24.661 14:11:25 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:24.661 14:11:25 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:24.661 14:11:25 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:24.662 14:11:25 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:24.662 14:11:25 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:24.662 14:11:25 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:24.662 14:11:25 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.662 14:11:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:24.662 14:11:25 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:24.662 14:11:25 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:24.662 14:11:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:24.662 14:11:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.662 14:11:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:24.662 14:11:25 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@76 -- # echo 0x88010 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:24.662 14:11:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@197 -- # echo nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.662 14:11:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:24.662 14:11:25 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:24.662 14:11:25 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:24.662 14:11:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:24.662 14:11:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:24.662 14:11:25 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:24.662 14:11:25 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:24.662 14:11:25 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:24.662 14:11:25 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:24.662 14:11:25 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:24.662 14:11:25 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:24.662 14:11:25 -- nvme/functions.sh@76 -- # echo 0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:24.662 14:11:25 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # trap - ERR 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # print_backtrace 00:10:24.662 14:11:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:10:24.662 14:11:25 -- common/autotest_common.sh@1142 -- # return 0 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # trap - ERR 00:10:24.662 14:11:25 -- nvme/functions.sh@204 -- # print_backtrace 00:10:24.662 14:11:25 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:10:24.662 14:11:25 -- common/autotest_common.sh@1142 -- # return 0 00:10:24.662 14:11:25 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:24.662 14:11:25 -- nvme/functions.sh@206 -- # echo nvme0 00:10:24.662 14:11:25 -- nvme/functions.sh@207 -- # return 0 00:10:24.662 14:11:25 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:10:24.662 14:11:25 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:10:24.662 14:11:25 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:25.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:25.647 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:10:25.647 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:25.647 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:25.647 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:25.909 14:11:27 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:10:25.909 14:11:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:25.909 14:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.909 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:25.909 ************************************ 00:10:25.909 START TEST nvme_flexible_data_placement 00:10:25.909 ************************************ 00:10:25.909 14:11:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:10:26.171 Initializing NVMe Controllers 00:10:26.171 Attaching to 0000:00:09.0 00:10:26.171 Controller supports FDP Attached to 0000:00:09.0 00:10:26.171 Namespace ID: 1 Endurance Group ID: 1 00:10:26.171 Initialization complete. 00:10:26.171 00:10:26.171 ================================== 00:10:26.171 == FDP tests for Namespace: #01 == 00:10:26.171 ================================== 00:10:26.171 00:10:26.171 Get Feature: FDP: 00:10:26.171 ================= 00:10:26.171 Enabled: Yes 00:10:26.171 FDP configuration Index: 0 00:10:26.171 00:10:26.171 FDP configurations log page 00:10:26.171 =========================== 00:10:26.171 Number of FDP configurations: 1 00:10:26.171 Version: 0 00:10:26.171 Size: 112 00:10:26.171 FDP Configuration Descriptor: 0 00:10:26.171 Descriptor Size: 96 00:10:26.171 Reclaim Group Identifier format: 2 00:10:26.171 FDP Volatile Write Cache: Not Present 00:10:26.171 FDP Configuration: Valid 00:10:26.171 Vendor Specific Size: 0 00:10:26.171 Number of Reclaim Groups: 2 00:10:26.171 Number of Reclaim Unit Handles: 8 00:10:26.171 Max Placement Identifiers: 128 00:10:26.171 Number of Namespaces Supported: 256 00:10:26.171 Reclaim Unit Nominal Size: 6000000 bytes 00:10:26.171 Estimated Reclaim Unit Time Limit: Not Reported 00:10:26.171 RUH Desc #000: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #001: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #002: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #003: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #004: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #005: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #006: RUH Type: Initially Isolated 00:10:26.171 RUH Desc #007: RUH Type: Initially Isolated 00:10:26.171 00:10:26.171 FDP reclaim unit handle usage log page 00:10:26.171 ====================================== 00:10:26.171 Number of Reclaim Unit Handles: 8 00:10:26.171 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:26.171 RUH Usage Desc #001: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #002: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #003: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #004: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #005: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #006: RUH Attributes: Unused 00:10:26.171 RUH Usage Desc #007: RUH Attributes: Unused 00:10:26.171 00:10:26.171 FDP statistics log page 00:10:26.171 ======================= 00:10:26.171 Host bytes with metadata written: 993927168 00:10:26.171 Media bytes with metadata written: 994197504 00:10:26.171 Media bytes erased: 0 00:10:26.171 00:10:26.171 FDP Reclaim unit handle status 
00:10:26.171 ============================== 00:10:26.171 Number of RUHS descriptors: 2 00:10:26.171 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000c1e 00:10:26.171 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:26.171 00:10:26.171 FDP write on placement id: 0 success 00:10:26.171 00:10:26.171 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:26.171 00:10:26.171 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:26.171 00:10:26.171 Get Feature: FDP Events for Placement handle: #0 00:10:26.171 ======================== 00:10:26.171 Number of FDP Events: 6 00:10:26.171 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:26.171 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:26.171 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:26.171 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:26.171 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:26.171 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:26.171 00:10:26.171 FDP events log page 00:10:26.171 =================== 00:10:26.171 Number of FDP events: 1 00:10:26.171 FDP Event #0: 00:10:26.171 Event Type: RU Not Written to Capacity 00:10:26.171 Placement Identifier: Valid 00:10:26.171 NSID: Valid 00:10:26.171 Location: Valid 00:10:26.171 Placement Identifier: 0 00:10:26.171 Event Timestamp: b 00:10:26.171 Namespace Identifier: 1 00:10:26.171 Reclaim Group Identifier: 0 00:10:26.171 Reclaim Unit Handle Identifier: 0 00:10:26.171 00:10:26.171 FDP test passed 00:10:26.171 ************************************ 00:10:26.171 END TEST nvme_flexible_data_placement 00:10:26.171 ************************************ 00:10:26.171 00:10:26.171 real 0m0.238s 00:10:26.171 user 0m0.066s 00:10:26.171 sys 0m0.070s 00:10:26.171 14:11:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.171 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:26.171 ************************************ 00:10:26.171 END TEST nvme_fdp 00:10:26.171 ************************************ 00:10:26.171 00:10:26.171 real 0m7.706s 00:10:26.171 user 0m1.050s 00:10:26.171 sys 0m1.433s 00:10:26.171 14:11:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.171 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:26.171 14:11:27 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:10:26.171 14:11:27 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:26.171 14:11:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:26.171 14:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.171 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:26.171 ************************************ 00:10:26.171 START TEST nvme_rpc 00:10:26.171 ************************************ 00:10:26.171 14:11:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:26.171 * Looking for test storage... 
00:10:26.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:26.171 14:11:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:26.171 14:11:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:26.171 14:11:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:26.171 14:11:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:26.171 14:11:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:26.171 14:11:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:26.171 14:11:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:26.171 14:11:27 -- scripts/common.sh@335 -- # IFS=.-: 00:10:26.171 14:11:27 -- scripts/common.sh@335 -- # read -ra ver1 00:10:26.171 14:11:27 -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.171 14:11:27 -- scripts/common.sh@336 -- # read -ra ver2 00:10:26.171 14:11:27 -- scripts/common.sh@337 -- # local 'op=<' 00:10:26.171 14:11:27 -- scripts/common.sh@339 -- # ver1_l=2 00:10:26.171 14:11:27 -- scripts/common.sh@340 -- # ver2_l=1 00:10:26.171 14:11:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:26.171 14:11:27 -- scripts/common.sh@343 -- # case "$op" in 00:10:26.171 14:11:27 -- scripts/common.sh@344 -- # : 1 00:10:26.171 14:11:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:26.171 14:11:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.433 14:11:27 -- scripts/common.sh@364 -- # decimal 1 00:10:26.433 14:11:27 -- scripts/common.sh@352 -- # local d=1 00:10:26.434 14:11:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.434 14:11:27 -- scripts/common.sh@354 -- # echo 1 00:10:26.434 14:11:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:26.434 14:11:27 -- scripts/common.sh@365 -- # decimal 2 00:10:26.434 14:11:27 -- scripts/common.sh@352 -- # local d=2 00:10:26.434 14:11:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.434 14:11:27 -- scripts/common.sh@354 -- # echo 2 00:10:26.434 14:11:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:26.434 14:11:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:26.434 14:11:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:26.434 14:11:27 -- scripts/common.sh@367 -- # return 0 00:10:26.434 14:11:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.434 14:11:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:26.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.434 --rc genhtml_branch_coverage=1 00:10:26.434 --rc genhtml_function_coverage=1 00:10:26.434 --rc genhtml_legend=1 00:10:26.434 --rc geninfo_all_blocks=1 00:10:26.434 --rc geninfo_unexecuted_blocks=1 00:10:26.434 00:10:26.434 ' 00:10:26.434 14:11:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:26.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.434 --rc genhtml_branch_coverage=1 00:10:26.434 --rc genhtml_function_coverage=1 00:10:26.434 --rc genhtml_legend=1 00:10:26.434 --rc geninfo_all_blocks=1 00:10:26.434 --rc geninfo_unexecuted_blocks=1 00:10:26.434 00:10:26.434 ' 00:10:26.434 14:11:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:26.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.434 --rc genhtml_branch_coverage=1 00:10:26.434 --rc genhtml_function_coverage=1 00:10:26.434 --rc genhtml_legend=1 00:10:26.434 --rc geninfo_all_blocks=1 00:10:26.434 --rc geninfo_unexecuted_blocks=1 00:10:26.434 00:10:26.434 ' 00:10:26.434 14:11:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:26.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.434 --rc genhtml_branch_coverage=1 00:10:26.434 --rc genhtml_function_coverage=1 00:10:26.434 --rc genhtml_legend=1 00:10:26.434 --rc geninfo_all_blocks=1 00:10:26.434 --rc geninfo_unexecuted_blocks=1 00:10:26.434 00:10:26.434 ' 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:26.434 14:11:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:26.434 14:11:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:26.434 14:11:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:26.434 14:11:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:26.434 14:11:27 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:26.434 14:11:27 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:26.434 14:11:27 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:26.434 14:11:27 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:26.434 14:11:27 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:26.434 14:11:27 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:26.434 14:11:27 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:26.434 14:11:27 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66380 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66380 00:10:26.434 14:11:27 -- common/autotest_common.sh@829 -- # '[' -z 66380 ']' 00:10:26.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.434 14:11:27 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:26.434 14:11:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.434 14:11:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.434 14:11:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.434 14:11:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.434 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:26.434 [2024-12-04 14:11:27.787356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
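Before the target comes up, get_first_nvme_bdf (traced at @1519-@1522 above) enumerates the NVMe PCI addresses by piping gen_nvme.sh's generated JSON config through jq and taking the first traddr. A condensed sketch of that selection, with the repo path as it appears in this run:

# Condensed sketch of get_first_nvme_bdf as traced above; gen_nvme.sh
# emits a bdev_nvme attach config whose params carry the PCI addresses.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1      # this run found 4 controllers
printf '%s\n' "${bdfs[@]}"           # 0000:00:06.0 ... 0000:00:09.0
bdf=${bdfs[0]}                       # nvme_rpc uses 0000:00:06.0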
00:10:26.434 [2024-12-04 14:11:27.787497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66380 ] 00:10:26.695 [2024-12-04 14:11:27.934710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:26.956 [2024-12-04 14:11:28.160495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.956 [2024-12-04 14:11:28.161128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.956 [2024-12-04 14:11:28.161181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.901 14:11:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.901 14:11:29 -- common/autotest_common.sh@862 -- # return 0 00:10:27.901 14:11:29 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:10:28.163 Nvme0n1 00:10:28.163 14:11:29 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:28.163 14:11:29 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:28.422 request: 00:10:28.422 { 00:10:28.422 "filename": "non_existing_file", 00:10:28.422 "bdev_name": "Nvme0n1", 00:10:28.422 "method": "bdev_nvme_apply_firmware", 00:10:28.422 "req_id": 1 00:10:28.422 } 00:10:28.422 Got JSON-RPC error response 00:10:28.422 response: 00:10:28.422 { 00:10:28.422 "code": -32603, 00:10:28.422 "message": "open file failed." 00:10:28.422 } 00:10:28.422 14:11:29 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:28.422 14:11:29 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:28.422 14:11:29 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:28.681 14:11:29 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:28.681 14:11:29 -- nvme/nvme_rpc.sh@40 -- # killprocess 66380 00:10:28.681 14:11:29 -- common/autotest_common.sh@936 -- # '[' -z 66380 ']' 00:10:28.681 14:11:29 -- common/autotest_common.sh@940 -- # kill -0 66380 00:10:28.681 14:11:29 -- common/autotest_common.sh@941 -- # uname 00:10:28.681 14:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.681 14:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66380 00:10:28.681 14:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:28.681 14:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:28.681 killing process with pid 66380 00:10:28.681 14:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66380' 00:10:28.681 14:11:29 -- common/autotest_common.sh@955 -- # kill 66380 00:10:28.681 14:11:29 -- common/autotest_common.sh@960 -- # wait 66380 00:10:29.618 00:10:29.618 real 0m3.579s 00:10:29.618 user 0m6.683s 00:10:29.618 sys 0m0.621s 00:10:29.618 ************************************ 00:10:29.618 END TEST nvme_rpc 00:10:29.618 ************************************ 00:10:29.618 14:11:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:29.618 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:29.878 14:11:31 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:29.878 14:11:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:29.878 14:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 
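The heart of nvme_rpc is the negative-path check traced above: attach the controller at the selected BDF, then require that bdev_nvme_apply_firmware fails cleanly (JSON-RPC error -32603, "open file failed.") when handed a file that does not exist, and finally detach and kill the target. Reduced to its essentials, with the rpc.py invocations exactly as logged:

# Reduced sketch of the nvme_rpc negative-path check traced above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
  echo 'expected -32603 "open file failed.", got success' >&2
  exit 1   # the test requires a non-empty rv here (rv=1 in the log)
fi
$rpc_py bdev_nvme_detach_controller Nvme0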
00:10:29.878 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:29.878 ************************************ 00:10:29.878 START TEST nvme_rpc_timeouts 00:10:29.878 ************************************ 00:10:29.878 14:11:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:29.878 * Looking for test storage... 00:10:29.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:29.878 14:11:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:29.878 14:11:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:29.878 14:11:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:29.878 14:11:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:29.878 14:11:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:29.878 14:11:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:29.878 14:11:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:29.878 14:11:31 -- scripts/common.sh@335 -- # IFS=.-: 00:10:29.878 14:11:31 -- scripts/common.sh@335 -- # read -ra ver1 00:10:29.878 14:11:31 -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.878 14:11:31 -- scripts/common.sh@336 -- # read -ra ver2 00:10:29.878 14:11:31 -- scripts/common.sh@337 -- # local 'op=<' 00:10:29.878 14:11:31 -- scripts/common.sh@339 -- # ver1_l=2 00:10:29.878 14:11:31 -- scripts/common.sh@340 -- # ver2_l=1 00:10:29.878 14:11:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:29.878 14:11:31 -- scripts/common.sh@343 -- # case "$op" in 00:10:29.878 14:11:31 -- scripts/common.sh@344 -- # : 1 00:10:29.878 14:11:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:29.878 14:11:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:29.878 14:11:31 -- scripts/common.sh@364 -- # decimal 1 00:10:29.878 14:11:31 -- scripts/common.sh@352 -- # local d=1 00:10:29.878 14:11:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.878 14:11:31 -- scripts/common.sh@354 -- # echo 1 00:10:29.878 14:11:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:29.878 14:11:31 -- scripts/common.sh@365 -- # decimal 2 00:10:29.878 14:11:31 -- scripts/common.sh@352 -- # local d=2 00:10:29.878 14:11:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.878 14:11:31 -- scripts/common.sh@354 -- # echo 2 00:10:29.878 14:11:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:29.878 14:11:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:29.878 14:11:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:29.878 14:11:31 -- scripts/common.sh@367 -- # return 0 00:10:29.878 14:11:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.878 14:11:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:29.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.878 --rc genhtml_branch_coverage=1 00:10:29.878 --rc genhtml_function_coverage=1 00:10:29.878 --rc genhtml_legend=1 00:10:29.878 --rc geninfo_all_blocks=1 00:10:29.878 --rc geninfo_unexecuted_blocks=1 00:10:29.878 00:10:29.878 ' 00:10:29.878 14:11:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:29.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.878 --rc genhtml_branch_coverage=1 00:10:29.878 --rc genhtml_function_coverage=1 00:10:29.878 --rc genhtml_legend=1 00:10:29.878 --rc geninfo_all_blocks=1 00:10:29.878 --rc geninfo_unexecuted_blocks=1 00:10:29.878 00:10:29.878 ' 00:10:29.878 14:11:31 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:29.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.878 --rc genhtml_branch_coverage=1 00:10:29.878 --rc genhtml_function_coverage=1 00:10:29.878 --rc genhtml_legend=1 00:10:29.878 --rc geninfo_all_blocks=1 00:10:29.878 --rc geninfo_unexecuted_blocks=1 00:10:29.879 00:10:29.879 ' 00:10:29.879 14:11:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:29.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.879 --rc genhtml_branch_coverage=1 00:10:29.879 --rc genhtml_function_coverage=1 00:10:29.879 --rc genhtml_legend=1 00:10:29.879 --rc geninfo_all_blocks=1 00:10:29.879 --rc geninfo_unexecuted_blocks=1 00:10:29.879 00:10:29.879 ' 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66453 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66453 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66489 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66489 00:10:29.879 14:11:31 -- common/autotest_common.sh@829 -- # '[' -z 66489 ']' 00:10:29.879 14:11:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.879 14:11:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.879 14:11:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.879 14:11:31 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:29.879 14:11:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.879 14:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:29.879 [2024-12-04 14:11:31.340741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:29.879 [2024-12-04 14:11:31.340861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66489 ] 00:10:30.138 [2024-12-04 14:11:31.488689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:30.397 [2024-12-04 14:11:31.640276] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:30.397 [2024-12-04 14:11:31.640866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.397 [2024-12-04 14:11:31.640979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.965 14:11:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.965 14:11:32 -- common/autotest_common.sh@862 -- # return 0 00:10:30.965 Checking default timeout settings: 00:10:30.965 14:11:32 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:30.965 14:11:32 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:31.224 Making settings changes with rpc: 00:10:31.224 14:11:32 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:31.224 14:11:32 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:31.224 Check default vs. modified settings: 00:10:31.224 14:11:32 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:31.224 14:11:32 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:31.482 Setting action_on_timeout is changed as expected. 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:31.482 Setting timeout_us is changed as expected. 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:31.482 Setting timeout_admin_us is changed as expected. 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66453 /tmp/settings_modified_66453 00:10:31.482 14:11:32 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66489 00:10:31.482 14:11:32 -- common/autotest_common.sh@936 -- # '[' -z 66489 ']' 00:10:31.482 14:11:32 -- common/autotest_common.sh@940 -- # kill -0 66489 00:10:31.482 14:11:32 -- common/autotest_common.sh@941 -- # uname 00:10:31.482 14:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:31.482 14:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66489 00:10:31.741 14:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:31.741 14:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:31.741 killing process with pid 66489 00:10:31.741 14:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66489' 00:10:31.741 14:11:32 -- common/autotest_common.sh@955 -- # kill 66489 00:10:31.741 14:11:32 -- common/autotest_common.sh@960 -- # wait 66489 00:10:32.678 RPC TIMEOUT SETTING TEST PASSED. 00:10:32.678 14:11:34 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:32.678 00:10:32.678 real 0m2.987s 00:10:32.678 user 0m5.659s 00:10:32.678 sys 0m0.439s 00:10:32.678 14:11:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.678 ************************************ 00:10:32.678 END TEST nvme_rpc_timeouts 00:10:32.678 ************************************ 00:10:32.678 14:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:32.938 14:11:34 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:10:32.938 14:11:34 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:10:32.938 14:11:34 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:32.938 14:11:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.938 14:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:32.938 ************************************ 00:10:32.938 START TEST nvme_xnvme 00:10:32.938 ************************************ 00:10:32.938 14:11:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:10:32.938 * Looking for test storage... 00:10:32.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:10:32.938 14:11:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:32.938 14:11:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:32.938 14:11:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:32.938 14:11:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:32.938 14:11:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:32.938 14:11:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:32.938 14:11:34 -- scripts/common.sh@335 -- # IFS=.-: 00:10:32.938 14:11:34 -- scripts/common.sh@335 -- # read -ra ver1 00:10:32.938 14:11:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.938 14:11:34 -- scripts/common.sh@336 -- # read -ra ver2 00:10:32.938 14:11:34 -- scripts/common.sh@337 -- # local 'op=<' 00:10:32.938 14:11:34 -- scripts/common.sh@339 -- # ver1_l=2 00:10:32.938 14:11:34 -- scripts/common.sh@340 -- # ver2_l=1 00:10:32.938 14:11:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:32.938 14:11:34 -- scripts/common.sh@343 -- # case "$op" in 00:10:32.938 14:11:34 -- scripts/common.sh@344 -- # : 1 00:10:32.938 14:11:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:32.938 14:11:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.938 14:11:34 -- scripts/common.sh@364 -- # decimal 1 00:10:32.938 14:11:34 -- scripts/common.sh@352 -- # local d=1 00:10:32.938 14:11:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.938 14:11:34 -- scripts/common.sh@354 -- # echo 1 00:10:32.938 14:11:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:32.938 14:11:34 -- scripts/common.sh@365 -- # decimal 2 00:10:32.938 14:11:34 -- scripts/common.sh@352 -- # local d=2 00:10:32.938 14:11:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.938 14:11:34 -- scripts/common.sh@354 -- # echo 2 00:10:32.938 14:11:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:32.938 14:11:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:32.938 14:11:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:32.938 14:11:34 -- scripts/common.sh@367 -- # return 0 00:10:32.938 14:11:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:32.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.938 --rc genhtml_branch_coverage=1 00:10:32.938 --rc genhtml_function_coverage=1 00:10:32.938 --rc genhtml_legend=1 00:10:32.938 --rc geninfo_all_blocks=1 00:10:32.938 --rc geninfo_unexecuted_blocks=1 00:10:32.938 00:10:32.938 ' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:32.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.938 --rc genhtml_branch_coverage=1 00:10:32.938 --rc genhtml_function_coverage=1 00:10:32.938 --rc genhtml_legend=1 00:10:32.938 --rc geninfo_all_blocks=1 00:10:32.938 --rc geninfo_unexecuted_blocks=1 00:10:32.938 00:10:32.938 ' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:32.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.938 --rc genhtml_branch_coverage=1 00:10:32.938 --rc genhtml_function_coverage=1 00:10:32.938 --rc genhtml_legend=1 00:10:32.938 --rc geninfo_all_blocks=1 00:10:32.938 --rc geninfo_unexecuted_blocks=1 00:10:32.938 00:10:32.938 ' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:32.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.938 --rc genhtml_branch_coverage=1 00:10:32.938 --rc genhtml_function_coverage=1 00:10:32.938 --rc genhtml_legend=1 00:10:32.938 --rc geninfo_all_blocks=1 00:10:32.938 --rc geninfo_unexecuted_blocks=1 00:10:32.938 00:10:32.938 ' 00:10:32.938 14:11:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.938 14:11:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.938 14:11:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.938 14:11:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.938 14:11:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.938 14:11:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.938 14:11:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.938 14:11:34 -- paths/export.sh@5 -- # export PATH 00:10:32.938 14:11:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:10:32.938 14:11:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:32.938 14:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.938 14:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:32.938 ************************************ 00:10:32.938 START TEST xnvme_to_malloc_dd_copy 00:10:32.938 ************************************ 00:10:32.938 14:11:34 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:10:32.938 14:11:34 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:10:32.938 14:11:34 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:10:32.938 14:11:34 -- dd/common.sh@191 -- # return 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@18 -- # local io 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:10:32.938 14:11:34 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:10:32.939 14:11:34 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:10:32.939 14:11:34 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:10:32.939 14:11:34 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:10:32.939 14:11:34 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:10:32.939 14:11:34 -- xnvme/xnvme.sh@42 -- # gen_conf 00:10:32.939 14:11:34 -- dd/common.sh@31 -- # xtrace_disable 00:10:32.939 14:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:32.939 { 00:10:32.939 "subsystems": [ 00:10:32.939 { 00:10:32.939 "subsystem": "bdev", 00:10:32.939 "config": [ 00:10:32.939 { 00:10:32.939 "params": { 00:10:32.939 "block_size": 512, 00:10:32.939 "num_blocks": 2097152, 00:10:32.939 "name": "malloc0" 00:10:32.939 }, 00:10:32.939 "method": "bdev_malloc_create" 00:10:32.939 }, 00:10:32.939 { 00:10:32.939 "params": { 00:10:32.939 "io_mechanism": "libaio", 00:10:32.939 "filename": "/dev/nullb0", 00:10:32.939 "name": "null0" 00:10:32.939 }, 00:10:32.939 "method": "bdev_xnvme_create" 00:10:32.939 }, 00:10:32.939 { 00:10:32.939 "method": "bdev_wait_for_examine" 00:10:32.939 } 00:10:32.939 ] 00:10:32.939 } 00:10:32.939 ] 00:10:32.939 } 00:10:33.200 [2024-12-04 14:11:34.405001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:33.200 [2024-12-04 14:11:34.405102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66609 ] 00:10:33.200 [2024-12-04 14:11:34.549918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.463 [2024-12-04 14:11:34.749735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.379  [2024-12-04T14:11:37.786Z] Copying: 234/1024 [MB] (234 MBps) [2024-12-04T14:11:39.162Z] Copying: 483/1024 [MB] (249 MBps) [2024-12-04T14:11:39.729Z] Copying: 794/1024 [MB] (311 MBps) [2024-12-04T14:11:41.633Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:10:40.168 00:10:40.168 14:11:41 -- xnvme/xnvme.sh@47 -- # gen_conf 00:10:40.168 14:11:41 -- dd/common.sh@31 -- # xtrace_disable 00:10:40.168 14:11:41 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:10:40.168 14:11:41 -- common/autotest_common.sh@10 -- # set +x 00:10:40.168 { 00:10:40.168 "subsystems": [ 00:10:40.168 { 00:10:40.168 "subsystem": "bdev", 00:10:40.168 "config": [ 00:10:40.168 { 00:10:40.168 "params": { 00:10:40.168 "block_size": 512, 00:10:40.168 "num_blocks": 2097152, 00:10:40.168 "name": "malloc0" 00:10:40.168 }, 00:10:40.168 "method": "bdev_malloc_create" 00:10:40.168 }, 00:10:40.168 { 00:10:40.168 "params": { 00:10:40.168 "io_mechanism": "libaio", 00:10:40.168 "filename": "/dev/nullb0", 00:10:40.168 "name": "null0" 00:10:40.168 }, 00:10:40.168 "method": "bdev_xnvme_create" 00:10:40.168 }, 00:10:40.168 { 00:10:40.168 "method": "bdev_wait_for_examine" 00:10:40.168 } 00:10:40.168 ] 00:10:40.168 } 00:10:40.168 ] 00:10:40.168 } 00:10:40.168 [2024-12-04 14:11:41.562357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:40.168 [2024-12-04 14:11:41.562461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66700 ] 00:10:40.429 [2024-12-04 14:11:41.708409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.429 [2024-12-04 14:11:41.848406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.333  [2024-12-04T14:11:44.733Z] Copying: 313/1024 [MB] (313 MBps) [2024-12-04T14:11:45.666Z] Copying: 628/1024 [MB] (314 MBps) [2024-12-04T14:11:45.924Z] Copying: 942/1024 [MB] (314 MBps) [2024-12-04T14:11:48.456Z] Copying: 1024/1024 [MB] (average 314 MBps) 00:10:46.991 00:10:46.991 14:11:47 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:10:46.991 14:11:47 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:10:46.991 14:11:47 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:10:46.991 14:11:47 -- xnvme/xnvme.sh@42 -- # gen_conf 00:10:46.991 14:11:47 -- dd/common.sh@31 -- # xtrace_disable 00:10:46.991 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:10:46.991 { 00:10:46.991 "subsystems": [ 00:10:46.991 { 00:10:46.991 "subsystem": "bdev", 00:10:46.991 "config": [ 00:10:46.991 { 00:10:46.991 "params": { 00:10:46.991 "block_size": 512, 00:10:46.991 "num_blocks": 2097152, 00:10:46.991 "name": "malloc0" 00:10:46.991 }, 00:10:46.991 "method": "bdev_malloc_create" 00:10:46.991 }, 00:10:46.991 { 00:10:46.991 "params": { 00:10:46.991 "io_mechanism": "io_uring", 00:10:46.991 "filename": "/dev/nullb0", 00:10:46.991 "name": "null0" 00:10:46.991 }, 00:10:46.991 "method": "bdev_xnvme_create" 00:10:46.991 }, 00:10:46.991 { 00:10:46.991 "method": "bdev_wait_for_examine" 00:10:46.991 } 00:10:46.991 ] 00:10:46.991 } 00:10:46.991 ] 00:10:46.991 } 00:10:46.991 [2024-12-04 14:11:47.942977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:46.991 [2024-12-04 14:11:47.943101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66776 ] 00:10:46.991 [2024-12-04 14:11:48.091752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.991 [2024-12-04 14:11:48.231223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.893  [2024-12-04T14:11:51.292Z] Copying: 321/1024 [MB] (321 MBps) [2024-12-04T14:11:52.227Z] Copying: 643/1024 [MB] (321 MBps) [2024-12-04T14:11:52.227Z] Copying: 965/1024 [MB] (322 MBps) [2024-12-04T14:11:54.136Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:10:52.671 00:10:52.930 14:11:54 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:10:52.930 14:11:54 -- xnvme/xnvme.sh@47 -- # gen_conf 00:10:52.930 14:11:54 -- dd/common.sh@31 -- # xtrace_disable 00:10:52.930 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:10:52.930 { 00:10:52.930 "subsystems": [ 00:10:52.930 { 00:10:52.930 "subsystem": "bdev", 00:10:52.930 "config": [ 00:10:52.930 { 00:10:52.930 "params": { 00:10:52.930 "block_size": 512, 00:10:52.930 "num_blocks": 2097152, 00:10:52.930 "name": "malloc0" 00:10:52.930 }, 00:10:52.930 "method": "bdev_malloc_create" 00:10:52.930 }, 00:10:52.930 { 00:10:52.930 "params": { 00:10:52.930 "io_mechanism": "io_uring", 00:10:52.930 "filename": "/dev/nullb0", 00:10:52.930 "name": "null0" 00:10:52.930 }, 00:10:52.930 "method": "bdev_xnvme_create" 00:10:52.930 }, 00:10:52.930 { 00:10:52.930 "method": "bdev_wait_for_examine" 00:10:52.930 } 00:10:52.930 ] 00:10:52.930 } 00:10:52.930 ] 00:10:52.930 } 00:10:52.930 [2024-12-04 14:11:54.197552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:52.930 [2024-12-04 14:11:54.197659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66854 ] 00:10:52.930 [2024-12-04 14:11:54.344067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.189 [2024-12-04 14:11:54.485240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.087  [2024-12-04T14:11:57.485Z] Copying: 327/1024 [MB] (327 MBps) [2024-12-04T14:11:58.422Z] Copying: 655/1024 [MB] (327 MBps) [2024-12-04T14:11:58.422Z] Copying: 983/1024 [MB] (327 MBps) [2024-12-04T14:12:00.420Z] Copying: 1024/1024 [MB] (average 327 MBps) 00:10:58.955 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:10:58.955 14:12:00 -- dd/common.sh@195 -- # modprobe -r null_blk 00:10:58.955 ************************************ 00:10:58.955 END TEST xnvme_to_malloc_dd_copy 00:10:58.955 00:10:58.955 real 0m26.005s 00:10:58.955 user 0m22.978s 00:10:58.955 sys 0m2.499s 00:10:58.955 14:12:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:58.955 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:10:58.955 ************************************ 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:10:58.955 14:12:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:58.955 14:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.955 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:10:58.955 ************************************ 00:10:58.955 START TEST xnvme_bdevperf 00:10:58.955 ************************************ 00:10:58.955 14:12:00 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:10:58.955 14:12:00 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:10:58.955 14:12:00 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:10:58.955 14:12:00 -- dd/common.sh@191 -- # return 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@60 -- # local io 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:10:58.955 14:12:00 -- xnvme/xnvme.sh@74 -- # gen_conf 00:10:58.955 14:12:00 -- dd/common.sh@31 -- # xtrace_disable 00:10:58.955 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:10:59.215 { 00:10:59.215 "subsystems": [ 00:10:59.215 { 00:10:59.215 "subsystem": "bdev", 00:10:59.215 "config": [ 00:10:59.215 { 00:10:59.215 "params": { 00:10:59.215 "io_mechanism": "libaio", 
00:10:59.215 "filename": "/dev/nullb0", 00:10:59.215 "name": "null0" 00:10:59.215 }, 00:10:59.215 "method": "bdev_xnvme_create" 00:10:59.215 }, 00:10:59.215 { 00:10:59.215 "method": "bdev_wait_for_examine" 00:10:59.215 } 00:10:59.215 ] 00:10:59.215 } 00:10:59.215 ] 00:10:59.215 } 00:10:59.215 [2024-12-04 14:12:00.461275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:59.215 [2024-12-04 14:12:00.461376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66954 ] 00:10:59.215 [2024-12-04 14:12:00.608247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.475 [2024-12-04 14:12:00.793185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.734 Running I/O for 5 seconds... 00:11:05.007 00:11:05.007 Latency(us) 00:11:05.007 [2024-12-04T14:12:06.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.007 [2024-12-04T14:12:06.472Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:05.007 null0 : 5.00 190500.38 744.14 0.00 0.00 333.65 111.85 500.97 00:11:05.007 [2024-12-04T14:12:06.472Z] =================================================================================================================== 00:11:05.007 [2024-12-04T14:12:06.472Z] Total : 190500.38 744.14 0.00 0.00 333.65 111.85 500.97 00:11:05.267 14:12:06 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:05.267 14:12:06 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:05.267 14:12:06 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:05.267 14:12:06 -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:05.267 14:12:06 -- dd/common.sh@31 -- # xtrace_disable 00:11:05.267 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:11:05.526 { 00:11:05.526 "subsystems": [ 00:11:05.526 { 00:11:05.526 "subsystem": "bdev", 00:11:05.526 "config": [ 00:11:05.526 { 00:11:05.526 "params": { 00:11:05.526 "io_mechanism": "io_uring", 00:11:05.526 "filename": "/dev/nullb0", 00:11:05.526 "name": "null0" 00:11:05.526 }, 00:11:05.526 "method": "bdev_xnvme_create" 00:11:05.526 }, 00:11:05.526 { 00:11:05.526 "method": "bdev_wait_for_examine" 00:11:05.526 } 00:11:05.526 ] 00:11:05.526 } 00:11:05.526 ] 00:11:05.526 } 00:11:05.526 [2024-12-04 14:12:06.763785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:05.526 [2024-12-04 14:12:06.764006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67034 ] 00:11:05.526 [2024-12-04 14:12:06.912049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.784 [2024-12-04 14:12:07.049542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.784 Running I/O for 5 seconds... 
00:11:11.052 00:11:11.052 Latency(us) 00:11:11.052 [2024-12-04T14:12:12.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.052 [2024-12-04T14:12:12.517Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:11.052 null0 : 5.00 238002.86 929.70 0.00 0.00 266.77 152.02 737.28 00:11:11.052 [2024-12-04T14:12:12.517Z] =================================================================================================================== 00:11:11.052 [2024-12-04T14:12:12.517Z] Total : 238002.86 929.70 0.00 0.00 266.77 152.02 737.28 00:11:11.621 14:12:12 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:11.622 14:12:12 -- dd/common.sh@195 -- # modprobe -r null_blk 00:11:11.622 00:11:11.622 real 0m12.508s 00:11:11.622 user 0m10.089s 00:11:11.622 sys 0m2.177s 00:11:11.622 ************************************ 00:11:11.622 END TEST xnvme_bdevperf 00:11:11.622 ************************************ 00:11:11.622 14:12:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:11.622 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:11.622 ************************************ 00:11:11.622 END TEST nvme_xnvme 00:11:11.622 ************************************ 00:11:11.622 00:11:11.622 real 0m38.758s 00:11:11.622 user 0m33.184s 00:11:11.622 sys 0m4.778s 00:11:11.622 14:12:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:11.622 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:11.622 14:12:12 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:11.622 14:12:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:11.622 14:12:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.622 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:11.622 ************************************ 00:11:11.622 START TEST blockdev_xnvme 00:11:11.622 ************************************ 00:11:11.622 14:12:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:11.622 * Looking for test storage... 00:11:11.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:11.622 14:12:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:11.622 14:12:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:11.622 14:12:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:11.884 14:12:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:11.884 14:12:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:11.884 14:12:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:11.884 14:12:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:11.884 14:12:13 -- scripts/common.sh@335 -- # IFS=.-: 00:11:11.884 14:12:13 -- scripts/common.sh@335 -- # read -ra ver1 00:11:11.884 14:12:13 -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.884 14:12:13 -- scripts/common.sh@336 -- # read -ra ver2 00:11:11.884 14:12:13 -- scripts/common.sh@337 -- # local 'op=<' 00:11:11.884 14:12:13 -- scripts/common.sh@339 -- # ver1_l=2 00:11:11.884 14:12:13 -- scripts/common.sh@340 -- # ver2_l=1 00:11:11.884 14:12:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:11.884 14:12:13 -- scripts/common.sh@343 -- # case "$op" in 00:11:11.884 14:12:13 -- scripts/common.sh@344 -- # : 1 00:11:11.884 14:12:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:11.884 14:12:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.884 14:12:13 -- scripts/common.sh@364 -- # decimal 1 00:11:11.884 14:12:13 -- scripts/common.sh@352 -- # local d=1 00:11:11.884 14:12:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.884 14:12:13 -- scripts/common.sh@354 -- # echo 1 00:11:11.884 14:12:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:11.884 14:12:13 -- scripts/common.sh@365 -- # decimal 2 00:11:11.884 14:12:13 -- scripts/common.sh@352 -- # local d=2 00:11:11.884 14:12:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.884 14:12:13 -- scripts/common.sh@354 -- # echo 2 00:11:11.884 14:12:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:11.884 14:12:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:11.884 14:12:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:11.884 14:12:13 -- scripts/common.sh@367 -- # return 0 00:11:11.884 14:12:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.884 14:12:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:11.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.884 --rc genhtml_branch_coverage=1 00:11:11.884 --rc genhtml_function_coverage=1 00:11:11.884 --rc genhtml_legend=1 00:11:11.884 --rc geninfo_all_blocks=1 00:11:11.884 --rc geninfo_unexecuted_blocks=1 00:11:11.884 00:11:11.884 ' 00:11:11.884 14:12:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:11.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.884 --rc genhtml_branch_coverage=1 00:11:11.884 --rc genhtml_function_coverage=1 00:11:11.884 --rc genhtml_legend=1 00:11:11.884 --rc geninfo_all_blocks=1 00:11:11.884 --rc geninfo_unexecuted_blocks=1 00:11:11.884 00:11:11.884 ' 00:11:11.884 14:12:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:11.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.884 --rc genhtml_branch_coverage=1 00:11:11.884 --rc genhtml_function_coverage=1 00:11:11.884 --rc genhtml_legend=1 00:11:11.884 --rc geninfo_all_blocks=1 00:11:11.884 --rc geninfo_unexecuted_blocks=1 00:11:11.884 00:11:11.884 ' 00:11:11.884 14:12:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:11.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.884 --rc genhtml_branch_coverage=1 00:11:11.884 --rc genhtml_function_coverage=1 00:11:11.884 --rc genhtml_legend=1 00:11:11.884 --rc geninfo_all_blocks=1 00:11:11.884 --rc geninfo_unexecuted_blocks=1 00:11:11.884 00:11:11.884 ' 00:11:11.884 14:12:13 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:11.884 14:12:13 -- bdev/nbd_common.sh@6 -- # set -e 00:11:11.884 14:12:13 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:11.884 14:12:13 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:11.884 14:12:13 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:11.884 14:12:13 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:11.884 14:12:13 -- bdev/blockdev.sh@18 -- # : 00:11:11.884 14:12:13 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:11.884 14:12:13 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:11.884 14:12:13 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:11.884 14:12:13 -- bdev/blockdev.sh@672 -- # uname -s 00:11:11.884 14:12:13 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:11.884 14:12:13 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:11.884 14:12:13 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:11:11.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.884 14:12:13 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:11.884 14:12:13 -- bdev/blockdev.sh@682 -- # dek= 00:11:11.884 14:12:13 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:11.884 14:12:13 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:11.884 14:12:13 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:11.884 14:12:13 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:11:11.884 14:12:13 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:11:11.884 14:12:13 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:11.884 14:12:13 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67174 00:11:11.884 14:12:13 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:11.884 14:12:13 -- bdev/blockdev.sh@47 -- # waitforlisten 67174 00:11:11.884 14:12:13 -- common/autotest_common.sh@829 -- # '[' -z 67174 ']' 00:11:11.884 14:12:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.884 14:12:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:11.884 14:12:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.884 14:12:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:11.884 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:11:11.884 14:12:13 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:11.884 [2024-12-04 14:12:13.209729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:11.884 [2024-12-04 14:12:13.209837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67174 ] 00:11:12.146 [2024-12-04 14:12:13.358679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.146 [2024-12-04 14:12:13.531971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:12.146 [2024-12-04 14:12:13.532344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.532 14:12:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.532 14:12:14 -- common/autotest_common.sh@862 -- # return 0 00:11:13.532 14:12:14 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:13.532 14:12:14 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:11:13.532 14:12:14 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:11:13.532 14:12:14 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:11:13.532 14:12:14 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:13.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.792 Waiting for block devices as requested 00:11:13.792 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.792 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.052 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.052 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.328 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:19.328 14:12:20 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:11:19.328 14:12:20 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:11:19.328 14:12:20 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:11:19.328 14:12:20 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:11:19.328 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.328 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:11:19.328 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:11:19.328 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:11:19.328 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.328 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.328 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:11:19.328 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:11:19.328 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.329 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.329 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:11:19.329 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:11:19.329 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.329 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:11:19.329 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:11:19.329 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.329 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:11:19.329 14:12:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:19.329 14:12:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:11:19.329 14:12:20 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:11:19.329 14:12:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:19.329 nvme0n1 00:11:19.329 nvme1n1 00:11:19.329 nvme1n2 00:11:19.329 nvme1n3 00:11:19.329 nvme2n1 00:11:19.329 nvme3n1 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:19.329 14:12:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@738 -- # cat 00:11:19.329 14:12:20 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:19.329 14:12:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:19.329 14:12:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:19.329 14:12:20 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:19.329 14:12:20 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:19.329 14:12:20 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:19.329 14:12:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.329 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:19.329 14:12:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.329 14:12:20 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:19.329 14:12:20 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:19.329 14:12:20 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4bdcc656-37de-4691-b8f4-ec2263fb3c50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4bdcc656-37de-4691-b8f4-ec2263fb3c50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9dc875ab-47e7-4235-8969-5cb1e2dad7da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9dc875ab-47e7-4235-8969-5cb1e2dad7da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "03818936-7b30-4d9e-9b72-a877d2ca116f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "03818936-7b30-4d9e-9b72-a877d2ca116f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "72fc9bf4-603a-4144-b6fa-c8c326015bde"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72fc9bf4-603a-4144-b6fa-c8c326015bde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f5bca23c-c464-44ff-8f5f-8788a00dfaa7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5bca23c-c464-44ff-8f5f-8788a00dfaa7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8185c1f3-7468-49ef-92b7-8eaa59cb4e8e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8185c1f3-7468-49ef-92b7-8eaa59cb4e8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:19.329 14:12:20 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:19.329 14:12:20 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:11:19.329 14:12:20 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:19.329 14:12:20 -- bdev/blockdev.sh@752 -- # killprocess 67174 00:11:19.329 14:12:20 -- common/autotest_common.sh@936 -- # '[' -z 67174 ']' 00:11:19.329 14:12:20 -- common/autotest_common.sh@940 -- # kill -0 67174 00:11:19.329 14:12:20 -- common/autotest_common.sh@941 -- # uname 00:11:19.329 14:12:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:19.329 14:12:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67174 00:11:19.329 killing process with pid 67174 00:11:19.329 14:12:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:19.329 14:12:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:19.329 14:12:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67174' 00:11:19.329 14:12:20 -- common/autotest_common.sh@955 -- # kill 67174 00:11:19.330 14:12:20 -- common/autotest_common.sh@960 -- # wait 67174 00:11:20.754 14:12:21 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:20.754 14:12:21 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:20.754 14:12:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:20.754 14:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.754 14:12:21 -- common/autotest_common.sh@10 -- # set +x 00:11:20.754 ************************************ 00:11:20.754 START TEST bdev_hello_world 00:11:20.754 ************************************ 00:11:20.754 14:12:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:20.754 [2024-12-04 14:12:21.924629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:20.754 [2024-12-04 14:12:21.924711] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67559 ] 00:11:20.754 [2024-12-04 14:12:22.060186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.754 [2024-12-04 14:12:22.197291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.014 [2024-12-04 14:12:22.477701] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:21.014 [2024-12-04 14:12:22.477741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:21.014 [2024-12-04 14:12:22.477753] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:21.272 [2024-12-04 14:12:22.479195] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:21.272 [2024-12-04 14:12:22.479513] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:21.272 [2024-12-04 14:12:22.479530] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:21.272 [2024-12-04 14:12:22.479777] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:21.272 00:11:21.272 [2024-12-04 14:12:22.479790] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:21.842 00:11:21.842 real 0m1.198s 00:11:21.842 user 0m0.947s 00:11:21.842 sys 0m0.142s 00:11:21.842 ************************************ 00:11:21.842 END TEST bdev_hello_world 00:11:21.842 ************************************ 00:11:21.842 14:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.842 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:11:21.842 14:12:23 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:21.842 14:12:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.842 14:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.842 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:11:21.842 ************************************ 00:11:21.842 START TEST bdev_bounds 00:11:21.842 ************************************ 00:11:21.842 14:12:23 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:21.842 14:12:23 -- bdev/blockdev.sh@288 -- # bdevio_pid=67596 00:11:21.842 Process bdevio pid: 67596 00:11:21.842 14:12:23 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:21.842 14:12:23 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67596' 00:11:21.842 14:12:23 -- bdev/blockdev.sh@291 -- # waitforlisten 67596 00:11:21.842 14:12:23 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:21.842 14:12:23 -- common/autotest_common.sh@829 -- # '[' -z 67596 ']' 00:11:21.842 14:12:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.842 14:12:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.842 14:12:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
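The bounds test runs bdevio in wait mode: started with -w, the app idles on /var/tmp/spdk.sock until the companion script triggers the suites (traced below at blockdev.sh@292). A sketch of the two halves, with paths as used in this job:

  # Terminal 1: start bdevio and let it wait for a go signal over RPC.
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
  # Terminal 2: kick off all registered suites against the waiting app.
  test/bdev/bdevio/tests.py perform_tests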
00:11:21.842 14:12:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.842 14:12:23 -- common/autotest_common.sh@10 -- # set +x 00:11:21.842 [2024-12-04 14:12:23.191057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:21.842 [2024-12-04 14:12:23.191181] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67596 ] 00:11:22.102 [2024-12-04 14:12:23.334082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.102 [2024-12-04 14:12:23.472841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.102 [2024-12-04 14:12:23.473075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.102 [2024-12-04 14:12:23.473108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.670 14:12:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.670 14:12:23 -- common/autotest_common.sh@862 -- # return 0 00:11:22.670 14:12:23 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:22.670 I/O targets: 00:11:22.670 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:22.670 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.670 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.670 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.670 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:22.670 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:22.670 00:11:22.670 00:11:22.670 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.670 http://cunit.sourceforge.net/ 00:11:22.670 00:11:22.670 00:11:22.670 Suite: bdevio tests on: nvme3n1 00:11:22.670 Test: blockdev write read block ...passed 00:11:22.670 Test: blockdev write zeroes read block ...passed 00:11:22.670 Test: blockdev write zeroes read no split ...passed 00:11:22.670 Test: blockdev write zeroes read split ...passed 00:11:22.670 Test: blockdev write zeroes read split partial ...passed 00:11:22.670 Test: blockdev reset ...passed 00:11:22.670 Test: blockdev write read 8 blocks ...passed 00:11:22.670 Test: blockdev write read size > 128k ...passed 00:11:22.670 Test: blockdev write read invalid size ...passed 00:11:22.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.670 Test: blockdev write read max offset ...passed 00:11:22.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.670 Test: blockdev writev readv 8 blocks ...passed 00:11:22.670 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.670 Test: blockdev writev readv block ...passed 00:11:22.670 Test: blockdev writev readv size > 128k ...passed 00:11:22.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.670 Test: blockdev comparev and writev ...passed 00:11:22.670 Test: blockdev nvme passthru rw ...passed 00:11:22.670 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.670 Test: blockdev nvme admin passthru ...passed 00:11:22.670 Test: blockdev copy ...passed 00:11:22.670 Suite: bdevio tests on: nvme2n1 00:11:22.670 Test: blockdev write read block ...passed 00:11:22.670 Test: blockdev write zeroes read block ...passed 00:11:22.670 Test: blockdev write zeroes read no split ...passed 00:11:22.670 Test: blockdev 
write zeroes read split ...passed 00:11:22.929 Test: blockdev write zeroes read split partial ...passed 00:11:22.929 Test: blockdev reset ...passed 00:11:22.929 Test: blockdev write read 8 blocks ...passed 00:11:22.929 Test: blockdev write read size > 128k ...passed 00:11:22.929 Test: blockdev write read invalid size ...passed 00:11:22.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.929 Test: blockdev write read max offset ...passed 00:11:22.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.929 Test: blockdev writev readv 8 blocks ...passed 00:11:22.929 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.929 Test: blockdev writev readv block ...passed 00:11:22.929 Test: blockdev writev readv size > 128k ...passed 00:11:22.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.929 Test: blockdev comparev and writev ...passed 00:11:22.929 Test: blockdev nvme passthru rw ...passed 00:11:22.929 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.929 Test: blockdev nvme admin passthru ...passed 00:11:22.929 Test: blockdev copy ...passed 00:11:22.929 Suite: bdevio tests on: nvme1n3 00:11:22.929 Test: blockdev write read block ...passed 00:11:22.929 Test: blockdev write zeroes read block ...passed 00:11:22.929 Test: blockdev write zeroes read no split ...passed 00:11:22.929 Test: blockdev write zeroes read split ...passed 00:11:22.929 Test: blockdev write zeroes read split partial ...passed 00:11:22.929 Test: blockdev reset ...passed 00:11:22.929 Test: blockdev write read 8 blocks ...passed 00:11:22.929 Test: blockdev write read size > 128k ...passed 00:11:22.929 Test: blockdev write read invalid size ...passed 00:11:22.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.929 Test: blockdev write read max offset ...passed 00:11:22.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.929 Test: blockdev writev readv 8 blocks ...passed 00:11:22.929 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.929 Test: blockdev writev readv block ...passed 00:11:22.929 Test: blockdev writev readv size > 128k ...passed 00:11:22.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.929 Test: blockdev comparev and writev ...passed 00:11:22.929 Test: blockdev nvme passthru rw ...passed 00:11:22.929 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.929 Test: blockdev nvme admin passthru ...passed 00:11:22.929 Test: blockdev copy ...passed 00:11:22.929 Suite: bdevio tests on: nvme1n2 00:11:22.929 Test: blockdev write read block ...passed 00:11:22.929 Test: blockdev write zeroes read block ...passed 00:11:22.929 Test: blockdev write zeroes read no split ...passed 00:11:22.929 Test: blockdev write zeroes read split ...passed 00:11:22.929 Test: blockdev write zeroes read split partial ...passed 00:11:22.929 Test: blockdev reset ...passed 00:11:22.929 Test: blockdev write read 8 blocks ...passed 00:11:22.929 Test: blockdev write read size > 128k ...passed 00:11:22.929 Test: blockdev write read invalid size ...passed 00:11:22.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.929 Test: blockdev write read max offset 
...passed 00:11:22.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.929 Test: blockdev writev readv 8 blocks ...passed 00:11:22.929 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.929 Test: blockdev writev readv block ...passed 00:11:22.929 Test: blockdev writev readv size > 128k ...passed 00:11:22.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.929 Test: blockdev comparev and writev ...passed 00:11:22.929 Test: blockdev nvme passthru rw ...passed 00:11:22.929 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.930 Test: blockdev nvme admin passthru ...passed 00:11:22.930 Test: blockdev copy ...passed 00:11:22.930 Suite: bdevio tests on: nvme1n1 00:11:22.930 Test: blockdev write read block ...passed 00:11:22.930 Test: blockdev write zeroes read block ...passed 00:11:22.930 Test: blockdev write zeroes read no split ...passed 00:11:22.930 Test: blockdev write zeroes read split ...passed 00:11:22.930 Test: blockdev write zeroes read split partial ...passed 00:11:22.930 Test: blockdev reset ...passed 00:11:22.930 Test: blockdev write read 8 blocks ...passed 00:11:22.930 Test: blockdev write read size > 128k ...passed 00:11:22.930 Test: blockdev write read invalid size ...passed 00:11:22.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.930 Test: blockdev write read max offset ...passed 00:11:22.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.930 Test: blockdev writev readv 8 blocks ...passed 00:11:22.930 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.930 Test: blockdev writev readv block ...passed 00:11:22.930 Test: blockdev writev readv size > 128k ...passed 00:11:22.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.930 Test: blockdev comparev and writev ...passed 00:11:22.930 Test: blockdev nvme passthru rw ...passed 00:11:22.930 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.930 Test: blockdev nvme admin passthru ...passed 00:11:22.930 Test: blockdev copy ...passed 00:11:22.930 Suite: bdevio tests on: nvme0n1 00:11:22.930 Test: blockdev write read block ...passed 00:11:22.930 Test: blockdev write zeroes read block ...passed 00:11:22.930 Test: blockdev write zeroes read no split ...passed 00:11:22.930 Test: blockdev write zeroes read split ...passed 00:11:22.930 Test: blockdev write zeroes read split partial ...passed 00:11:22.930 Test: blockdev reset ...passed 00:11:22.930 Test: blockdev write read 8 blocks ...passed 00:11:22.930 Test: blockdev write read size > 128k ...passed 00:11:22.930 Test: blockdev write read invalid size ...passed 00:11:22.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.930 Test: blockdev write read max offset ...passed 00:11:22.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.930 Test: blockdev writev readv 8 blocks ...passed 00:11:22.930 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.930 Test: blockdev writev readv block ...passed 00:11:22.930 Test: blockdev writev readv size > 128k ...passed 00:11:22.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.930 Test: blockdev comparev and writev ...passed 00:11:22.930 Test: blockdev nvme passthru rw ...passed 00:11:22.930 Test: 
blockdev nvme passthru vendor specific ...passed 00:11:22.930 Test: blockdev nvme admin passthru ...passed 00:11:22.930 Test: blockdev copy ...passed 00:11:22.930 00:11:22.930 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.930 suites 6 6 n/a 0 0 00:11:22.930 tests 138 138 138 0 0 00:11:22.930 asserts 780 780 780 0 n/a 00:11:22.930 00:11:22.930 Elapsed time = 0.845 seconds 00:11:22.930 0 00:11:22.930 14:12:24 -- bdev/blockdev.sh@293 -- # killprocess 67596 00:11:22.930 14:12:24 -- common/autotest_common.sh@936 -- # '[' -z 67596 ']' 00:11:22.930 14:12:24 -- common/autotest_common.sh@940 -- # kill -0 67596 00:11:22.930 14:12:24 -- common/autotest_common.sh@941 -- # uname 00:11:22.930 14:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.930 14:12:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67596 00:11:23.189 killing process with pid 67596 00:11:23.189 14:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.189 14:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.189 14:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67596' 00:11:23.189 14:12:24 -- common/autotest_common.sh@955 -- # kill 67596 00:11:23.189 14:12:24 -- common/autotest_common.sh@960 -- # wait 67596 00:11:23.759 14:12:25 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:23.759 00:11:23.759 real 0m1.878s 00:11:23.759 user 0m4.453s 00:11:23.759 sys 0m0.263s 00:11:23.759 ************************************ 00:11:23.759 END TEST bdev_bounds 00:11:23.759 ************************************ 00:11:23.759 14:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.759 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 14:12:25 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:23.759 14:12:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:23.759 14:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.759 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 ************************************ 00:11:23.759 START TEST bdev_nbd 00:11:23.759 ************************************ 00:11:23.759 14:12:25 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:11:23.759 14:12:25 -- bdev/blockdev.sh@298 -- # uname -s 00:11:23.759 14:12:25 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:23.759 14:12:25 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.759 14:12:25 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:23.759 14:12:25 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:23.759 14:12:25 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:23.759 14:12:25 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:11:23.759 14:12:25 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:23.759 14:12:25 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:23.759 14:12:25 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:23.759 14:12:25 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:11:23.759 
14:12:25 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:23.759 14:12:25 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:23.759 14:12:25 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:23.759 14:12:25 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:23.759 14:12:25 -- bdev/blockdev.sh@316 -- # nbd_pid=67652 00:11:23.759 14:12:25 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:23.759 14:12:25 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:23.759 14:12:25 -- bdev/blockdev.sh@318 -- # waitforlisten 67652 /var/tmp/spdk-nbd.sock 00:11:23.759 14:12:25 -- common/autotest_common.sh@829 -- # '[' -z 67652 ']' 00:11:23.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:23.759 14:12:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:23.759 14:12:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:23.759 14:12:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:23.759 14:12:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:23.759 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 [2024-12-04 14:12:25.127424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:23.759 [2024-12-04 14:12:25.127529] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.018 [2024-12-04 14:12:25.274169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.018 [2024-12-04 14:12:25.412640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.585 14:12:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.585 14:12:25 -- common/autotest_common.sh@862 -- # return 0 00:11:24.585 14:12:25 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@24 -- # local i 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:24.585 14:12:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:11:24.846 14:12:26 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:24.846 14:12:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:24.846 14:12:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:24.846 14:12:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:24.846 14:12:26 -- common/autotest_common.sh@867 -- # local i 00:11:24.846 14:12:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:24.846 14:12:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:24.846 14:12:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:24.846 14:12:26 -- common/autotest_common.sh@871 -- # break 00:11:24.846 14:12:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:24.846 14:12:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:24.846 14:12:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.846 1+0 records in 00:11:24.846 1+0 records out 00:11:24.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727383 s, 5.6 MB/s 00:11:24.846 14:12:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.846 14:12:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:24.846 14:12:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.846 14:12:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:24.846 14:12:26 -- common/autotest_common.sh@887 -- # return 0 00:11:24.846 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.846 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:24.846 14:12:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:25.109 14:12:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:25.109 14:12:26 -- common/autotest_common.sh@867 -- # local i 00:11:25.109 14:12:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:25.109 14:12:26 -- common/autotest_common.sh@871 -- # break 00:11:25.109 14:12:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.109 1+0 records in 00:11:25.109 1+0 records out 00:11:25.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955139 s, 4.3 MB/s 00:11:25.109 14:12:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.109 14:12:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:25.109 14:12:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.109 14:12:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.109 14:12:26 -- common/autotest_common.sh@887 -- # return 0 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:25.109 14:12:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:25.109 14:12:26 -- common/autotest_common.sh@867 -- # local i 00:11:25.109 14:12:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:25.109 14:12:26 -- common/autotest_common.sh@871 -- # break 00:11:25.109 14:12:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.109 14:12:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.109 1+0 records in 00:11:25.109 1+0 records out 00:11:25.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798286 s, 5.1 MB/s 00:11:25.109 14:12:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.109 14:12:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:25.109 14:12:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.109 14:12:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.109 14:12:26 -- common/autotest_common.sh@887 -- # return 0 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:25.109 14:12:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:11:25.371 14:12:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:25.371 14:12:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:25.371 14:12:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:25.371 14:12:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:25.371 14:12:26 -- common/autotest_common.sh@867 -- # local i 00:11:25.371 14:12:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.371 14:12:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.372 14:12:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:25.372 14:12:26 -- common/autotest_common.sh@871 -- # break 00:11:25.372 14:12:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.372 14:12:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.372 14:12:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.372 1+0 records in 00:11:25.372 1+0 records out 00:11:25.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877895 s, 4.7 MB/s 00:11:25.372 14:12:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.372 14:12:26 -- common/autotest_common.sh@884 -- # size=4096 00:11:25.372 14:12:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.372 14:12:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.372 14:12:26 -- common/autotest_common.sh@887 -- # return 0 00:11:25.372 14:12:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.372 14:12:26 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:25.372 14:12:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:11:25.633 14:12:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:25.633 14:12:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:25.633 14:12:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:25.633 14:12:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:25.633 14:12:27 -- common/autotest_common.sh@867 -- # local i 00:11:25.633 14:12:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.633 14:12:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.633 14:12:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:25.633 14:12:27 -- common/autotest_common.sh@871 -- # break 00:11:25.633 14:12:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.633 14:12:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.633 14:12:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.633 1+0 records in 00:11:25.633 1+0 records out 00:11:25.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161434 s, 2.5 MB/s 00:11:25.633 14:12:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.633 14:12:27 -- common/autotest_common.sh@884 -- # size=4096 00:11:25.633 14:12:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.633 14:12:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.633 14:12:27 -- common/autotest_common.sh@887 -- # return 0 00:11:25.633 14:12:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.633 14:12:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:25.633 14:12:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:11:25.895 14:12:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:25.895 14:12:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:25.895 14:12:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:25.895 14:12:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:25.895 14:12:27 -- common/autotest_common.sh@867 -- # local i 00:11:25.895 14:12:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.895 14:12:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.895 14:12:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:25.895 14:12:27 -- common/autotest_common.sh@871 -- # break 00:11:25.895 14:12:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.895 14:12:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.895 14:12:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.895 1+0 records in 00:11:25.895 1+0 records out 00:11:25.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101672 s, 4.0 MB/s 00:11:25.895 14:12:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.895 14:12:27 -- common/autotest_common.sh@884 -- # size=4096 00:11:25.895 14:12:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.895 14:12:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.895 14:12:27 -- common/autotest_common.sh@887 -- # return 0 
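Each nbd_start_disk call above maps one bdev onto a kernel /dev/nbdX node through the dedicated RPC socket. A stand-alone sketch of the attach-and-probe pattern, using the socket path and device names from this run:

  # Attach bdev nvme3n1 to /dev/nbd5 via the NBD-specific RPC socket.
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd5
  # The same probe waitfornbd performs: is the device visible in
  # /proc/partitions, and can one 4 KiB block be read with O_DIRECT?
  grep -q -w nbd5 /proc/partitions
  dd if=/dev/nbd5 of=/tmp/nbdtest bs=4096 count=1 iflag=direct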
00:11:25.895 14:12:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.895 14:12:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:25.895 14:12:27 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd0", 00:11:26.156 "bdev_name": "nvme0n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd1", 00:11:26.156 "bdev_name": "nvme1n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd2", 00:11:26.156 "bdev_name": "nvme1n2" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd3", 00:11:26.156 "bdev_name": "nvme1n3" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd4", 00:11:26.156 "bdev_name": "nvme2n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd5", 00:11:26.156 "bdev_name": "nvme3n1" 00:11:26.156 } 00:11:26.156 ]' 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd0", 00:11:26.156 "bdev_name": "nvme0n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd1", 00:11:26.156 "bdev_name": "nvme1n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd2", 00:11:26.156 "bdev_name": "nvme1n2" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd3", 00:11:26.156 "bdev_name": "nvme1n3" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd4", 00:11:26.156 "bdev_name": "nvme2n1" 00:11:26.156 }, 00:11:26.156 { 00:11:26.156 "nbd_device": "/dev/nbd5", 00:11:26.156 "bdev_name": "nvme3n1" 00:11:26.156 } 00:11:26.156 ]' 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@51 -- # local i 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.156 14:12:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@41 -- # break 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@41 -- # break 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.418 14:12:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@41 -- # break 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.679 14:12:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@41 -- # break 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.940 14:12:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@41 -- # break 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@41 -- # break 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.201 14:12:28 -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.201 14:12:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@65 -- # true 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@65 -- # count=0 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@122 -- # count=0 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@127 -- # return 0 00:11:27.463 14:12:28 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@12 -- # local i 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:27.463 14:12:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:11:27.725 /dev/nbd0 00:11:27.725 14:12:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:27.725 14:12:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:27.725 14:12:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:27.725 14:12:29 -- common/autotest_common.sh@867 -- # local i 00:11:27.725 14:12:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.725 14:12:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.725 14:12:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:27.725 14:12:29 -- common/autotest_common.sh@871 -- # break 00:11:27.725 14:12:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.725 14:12:29 -- common/autotest_common.sh@882 -- 
# (( i <= 20 )) 00:11:27.725 14:12:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.725 1+0 records in 00:11:27.725 1+0 records out 00:11:27.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100878 s, 4.1 MB/s 00:11:27.725 14:12:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.725 14:12:29 -- common/autotest_common.sh@884 -- # size=4096 00:11:27.725 14:12:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.725 14:12:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.725 14:12:29 -- common/autotest_common.sh@887 -- # return 0 00:11:27.725 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.725 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:27.725 14:12:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:11:27.986 /dev/nbd1 00:11:27.986 14:12:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:27.986 14:12:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:27.986 14:12:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:27.986 14:12:29 -- common/autotest_common.sh@867 -- # local i 00:11:27.986 14:12:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.986 14:12:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.986 14:12:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:27.986 14:12:29 -- common/autotest_common.sh@871 -- # break 00:11:27.986 14:12:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.986 14:12:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.986 14:12:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.986 1+0 records in 00:11:27.986 1+0 records out 00:11:27.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750123 s, 5.5 MB/s 00:11:27.986 14:12:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.986 14:12:29 -- common/autotest_common.sh@884 -- # size=4096 00:11:27.986 14:12:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.986 14:12:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.986 14:12:29 -- common/autotest_common.sh@887 -- # return 0 00:11:27.986 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.986 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:27.986 14:12:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:11:28.247 /dev/nbd10 00:11:28.247 14:12:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:28.247 14:12:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:28.247 14:12:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:28.247 14:12:29 -- common/autotest_common.sh@867 -- # local i 00:11:28.247 14:12:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.247 14:12:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.247 14:12:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:28.247 14:12:29 -- common/autotest_common.sh@871 -- # break 00:11:28.247 14:12:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.247 14:12:29 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.248 14:12:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.248 1+0 records in 00:11:28.248 1+0 records out 00:11:28.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100736 s, 4.1 MB/s 00:11:28.248 14:12:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.248 14:12:29 -- common/autotest_common.sh@884 -- # size=4096 00:11:28.248 14:12:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.248 14:12:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.248 14:12:29 -- common/autotest_common.sh@887 -- # return 0 00:11:28.248 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.248 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:28.248 14:12:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:11:28.509 /dev/nbd11 00:11:28.509 14:12:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:28.509 14:12:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:28.509 14:12:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:28.509 14:12:29 -- common/autotest_common.sh@867 -- # local i 00:11:28.509 14:12:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.509 14:12:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.509 14:12:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:28.509 14:12:29 -- common/autotest_common.sh@871 -- # break 00:11:28.509 14:12:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.509 14:12:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.509 14:12:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.509 1+0 records in 00:11:28.509 1+0 records out 00:11:28.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971207 s, 4.2 MB/s 00:11:28.509 14:12:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.509 14:12:29 -- common/autotest_common.sh@884 -- # size=4096 00:11:28.509 14:12:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.509 14:12:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.509 14:12:29 -- common/autotest_common.sh@887 -- # return 0 00:11:28.509 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.509 14:12:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:28.509 14:12:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:11:28.771 /dev/nbd12 00:11:28.771 14:12:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:28.771 14:12:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:28.771 14:12:30 -- common/autotest_common.sh@867 -- # local i 00:11:28.771 14:12:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:28.771 14:12:30 -- common/autotest_common.sh@871 -- # break 00:11:28.771 14:12:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 
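The waitfornbd traces here are a bounded retry: up to 20 attempts to find the device in /proc/partitions before the dd probe runs. A plain-shell sketch of the same loop (the back-off interval is an assumption; the trace shows only the 20-try bound):

  # Poll until the kernel has registered nbd12, giving up after 20 tries.
  for i in $(seq 1 20); do
      grep -q -w nbd12 /proc/partitions && break
      sleep 0.1  # interval assumed; not visible in the xtrace output
  done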
00:11:28.771 14:12:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.771 1+0 records in 00:11:28.771 1+0 records out 00:11:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135201 s, 3.0 MB/s 00:11:28.771 14:12:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.771 14:12:30 -- common/autotest_common.sh@884 -- # size=4096 00:11:28.771 14:12:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.771 14:12:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.771 14:12:30 -- common/autotest_common.sh@887 -- # return 0 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:11:28.771 /dev/nbd13 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:28.771 14:12:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:28.771 14:12:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:28.771 14:12:30 -- common/autotest_common.sh@867 -- # local i 00:11:28.771 14:12:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:28.771 14:12:30 -- common/autotest_common.sh@871 -- # break 00:11:28.771 14:12:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.771 14:12:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.771 1+0 records in 00:11:28.771 1+0 records out 00:11:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488985 s, 8.4 MB/s 00:11:28.771 14:12:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.771 14:12:30 -- common/autotest_common.sh@884 -- # size=4096 00:11:28.771 14:12:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.771 14:12:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.772 14:12:30 -- common/autotest_common.sh@887 -- # return 0 00:11:28.772 14:12:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.772 14:12:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:28.772 14:12:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.772 14:12:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.772 14:12:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd0", 00:11:29.031 "bdev_name": "nvme0n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd1", 00:11:29.031 "bdev_name": "nvme1n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd10", 00:11:29.031 "bdev_name": "nvme1n2" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd11", 00:11:29.031 "bdev_name": "nvme1n3" 00:11:29.031 }, 00:11:29.031 { 
00:11:29.031 "nbd_device": "/dev/nbd12", 00:11:29.031 "bdev_name": "nvme2n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd13", 00:11:29.031 "bdev_name": "nvme3n1" 00:11:29.031 } 00:11:29.031 ]' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd0", 00:11:29.031 "bdev_name": "nvme0n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd1", 00:11:29.031 "bdev_name": "nvme1n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd10", 00:11:29.031 "bdev_name": "nvme1n2" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd11", 00:11:29.031 "bdev_name": "nvme1n3" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd12", 00:11:29.031 "bdev_name": "nvme2n1" 00:11:29.031 }, 00:11:29.031 { 00:11:29.031 "nbd_device": "/dev/nbd13", 00:11:29.031 "bdev_name": "nvme3n1" 00:11:29.031 } 00:11:29.031 ]' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:29.031 /dev/nbd1 00:11:29.031 /dev/nbd10 00:11:29.031 /dev/nbd11 00:11:29.031 /dev/nbd12 00:11:29.031 /dev/nbd13' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:29.031 /dev/nbd1 00:11:29.031 /dev/nbd10 00:11:29.031 /dev/nbd11 00:11:29.031 /dev/nbd12 00:11:29.031 /dev/nbd13' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@65 -- # count=6 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@66 -- # echo 6 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@95 -- # count=6 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:29.031 14:12:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:29.032 256+0 records in 00:11:29.032 256+0 records out 00:11:29.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763916 s, 137 MB/s 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.032 14:12:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.299 256+0 records in 00:11:29.299 256+0 records out 00:11:29.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0856051 s, 12.2 MB/s 00:11:29.299 14:12:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.299 14:12:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.299 256+0 records in 00:11:29.299 256+0 records out 00:11:29.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196707 s, 5.3 MB/s 00:11:29.299 14:12:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.299 14:12:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 
bs=4096 count=256 oflag=direct 00:11:29.560 256+0 records in 00:11:29.560 256+0 records out 00:11:29.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176385 s, 5.9 MB/s 00:11:29.560 14:12:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.560 14:12:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:29.822 256+0 records in 00:11:29.822 256+0 records out 00:11:29.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15114 s, 6.9 MB/s 00:11:29.822 14:12:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.822 14:12:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:30.083 256+0 records in 00:11:30.083 256+0 records out 00:11:30.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220028 s, 4.8 MB/s 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:30.083 256+0 records in 00:11:30.083 256+0 records out 00:11:30.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.072793 s, 14.4 MB/s 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
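The verify pass traced above mirrors the write pass: 1 MiB of urandom data is pushed through each nbd device with O_DIRECT, then compared back byte-for-byte. Condensed for one device, with a temp path standing in for the repo's nbdrandtest file:

  # Write 256 x 4 KiB of random data through the block device, bypassing
  # the page cache, then verify the first 1 MiB matches byte-for-byte.
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0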
00:11:30.083 14:12:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@51 -- # local i 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.083 14:12:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@41 -- # break 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.345 14:12:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@41 -- # break 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.606 14:12:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@41 -- # break 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.606 14:12:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@41 -- # break 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.864 14:12:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@41 -- # break 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@41 -- # break 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.124 14:12:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@65 -- # true 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@65 -- # count=0 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@104 -- # count=0 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@109 -- # return 0 00:11:31.382 14:12:32 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:31.382 14:12:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:31.640 malloc_lvol_verify 00:11:31.640 14:12:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:31.640 9756899a-0097-43d7-b5ad-4ff831272c83 00:11:31.640 14:12:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
bdev_lvol_create lvol 4 -l lvs 00:11:31.898 1dba0810-0b1e-41eb-8af9-e6864537e438 00:11:31.898 14:12:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:32.155 /dev/nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:32.155 mke2fs 1.47.0 (5-Feb-2023) 00:11:32.155 Discarding device blocks: 0/4096 done 00:11:32.155 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:32.155 00:11:32.155 Allocating group tables: 0/1 done 00:11:32.155 Writing inode tables: 0/1 done 00:11:32.155 Creating journal (1024 blocks): done 00:11:32.155 Writing superblocks and filesystem accounting information: 0/1 done 00:11:32.155 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@51 -- # local i 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@41 -- # break 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:32.155 14:12:33 -- bdev/nbd_common.sh@147 -- # return 0 00:11:32.155 14:12:33 -- bdev/blockdev.sh@324 -- # killprocess 67652 00:11:32.155 14:12:33 -- common/autotest_common.sh@936 -- # '[' -z 67652 ']' 00:11:32.155 14:12:33 -- common/autotest_common.sh@940 -- # kill -0 67652 00:11:32.155 14:12:33 -- common/autotest_common.sh@941 -- # uname 00:11:32.155 14:12:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:32.155 14:12:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67652 00:11:32.155 killing process with pid 67652 00:11:32.155 14:12:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:32.155 14:12:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:32.155 14:12:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67652' 00:11:32.155 14:12:33 -- common/autotest_common.sh@955 -- # kill 67652 00:11:32.155 14:12:33 -- common/autotest_common.sh@960 -- # wait 67652 00:11:33.089 ************************************ 00:11:33.089 END TEST bdev_nbd 00:11:33.089 ************************************ 00:11:33.089 14:12:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:33.089 00:11:33.089 real 0m9.202s 00:11:33.089 user 0m12.576s 00:11:33.089 sys 0m3.095s 00:11:33.089 14:12:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:33.089 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:11:33.089 14:12:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:33.089 14:12:34 -- 
bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:11:33.089 14:12:34 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:11:33.089 14:12:34 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.089 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:11:33.089 ************************************ 00:11:33.089 START TEST bdev_fio 00:11:33.089 ************************************ 00:11:33.089 14:12:34 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:33.089 14:12:34 -- bdev/blockdev.sh@329 -- # local env_context 00:11:33.089 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:33.089 14:12:34 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:33.089 14:12:34 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:33.089 14:12:34 -- bdev/blockdev.sh@337 -- # echo '' 00:11:33.089 14:12:34 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:33.089 14:12:34 -- bdev/blockdev.sh@337 -- # env_context= 00:11:33.089 14:12:34 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:33.089 14:12:34 -- common/autotest_common.sh@1270 -- # local workload=verify 00:11:33.089 14:12:34 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:11:33.089 14:12:34 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:33.089 14:12:34 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:33.089 14:12:34 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:33.089 14:12:34 -- common/autotest_common.sh@1290 -- # cat 00:11:33.089 14:12:34 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1303 -- # cat 00:11:33.089 14:12:34 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:33.089 14:12:34 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:33.089 14:12:34 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:33.089 14:12:34 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:33.089 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.089 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:11:33.090 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.090 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:11:33.090 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.090 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:11:33.090 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.090 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 
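The bdev.fio job file being assembled here gets one section per bdev; condensed, the traced loop amounts to the sketch below, where the bdevs_name contents and the append redirection are inferred from context rather than shown verbatim in the xtrace output.

    # One fio job section per bdev; the spdk_bdev ioengine addresses bdevs by name.
    bdevs_name=(nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1)
    fio_config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in "${bdevs_name[@]}"; do
        echo "[job_$b]"    >> "$fio_config"
        echo "filename=$b" >> "$fio_config"
    done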
00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:11:33.090 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.090 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:11:33.090 14:12:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:33.090 14:12:34 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:11:33.090 14:12:34 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:33.090 14:12:34 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:33.090 14:12:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:33.090 14:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.090 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:11:33.090 ************************************ 00:11:33.090 START TEST bdev_fio_rw_verify 00:11:33.090 ************************************ 00:11:33.090 14:12:34 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:33.090 14:12:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:33.090 14:12:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:33.090 14:12:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:33.090 14:12:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:33.090 14:12:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:33.090 14:12:34 -- common/autotest_common.sh@1330 -- # shift 00:11:33.090 14:12:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:33.090 14:12:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:33.090 14:12:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:33.090 14:12:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:33.090 14:12:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:33.090 14:12:34 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:33.090 14:12:34 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:33.090 14:12:34 -- common/autotest_common.sh@1336 -- # break 00:11:33.090 14:12:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:33.090 14:12:34 -- common/autotest_common.sh@1341 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:33.349 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:33.349 fio-3.35 00:11:33.349 Starting 6 threads 00:11:45.633 00:11:45.633 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68035: Wed Dec 4 14:12:45 2024 00:11:45.633 read: IOPS=22.9k, BW=89.5MiB/s (93.9MB/s)(895MiB/10002msec) 00:11:45.633 slat (usec): min=2, max=1983, avg= 4.92, stdev=11.19 00:11:45.634 clat (usec): min=71, max=201681, avg=802.05, stdev=1374.58 00:11:45.634 lat (usec): min=74, max=201691, avg=806.97, stdev=1374.91 00:11:45.634 clat percentiles (usec): 00:11:45.634 | 50.000th=[ 515], 99.000th=[ 3130], 99.900th=[ 4293], 00:11:45.634 | 99.990th=[ 5538], 99.999th=[202376] 00:11:45.634 write: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(905MiB/10002msec); 0 zone resets 00:11:45.634 slat (usec): min=10, max=4095, avg=32.53, stdev=112.09 00:11:45.634 clat (usec): min=60, max=9601, avg=1013.54, stdev=804.37 00:11:45.634 lat (usec): min=74, max=9616, avg=1046.07, stdev=819.30 00:11:45.634 clat percentiles (usec): 00:11:45.634 | 50.000th=[ 701], 99.000th=[ 3654], 99.900th=[ 5080], 99.990th=[ 7439], 00:11:45.634 | 99.999th=[ 9634] 00:11:45.634 bw ( KiB/s): min=49151, max=159802, per=100.00%, avg=94448.53, stdev=6317.92, samples=114 00:11:45.634 iops : min=12287, max=39950, avg=23611.53, stdev=1579.48, samples=114 00:11:45.634 lat (usec) : 100=0.13%, 250=11.84%, 500=28.92%, 750=16.54%, 1000=9.17% 00:11:45.634 lat (msec) : 2=23.71%, 4=9.31%, 10=0.38%, 250=0.01% 00:11:45.634 cpu : usr=44.97%, sys=29.66%, ctx=7759, majf=0, minf=22233 00:11:45.634 IO depths : 1=11.3%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.634 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.634 issued rwts: total=229178,231715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.634 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:45.634 00:11:45.634 Run status group 0 (all jobs): 00:11:45.634 READ: bw=89.5MiB/s (93.9MB/s), 89.5MiB/s-89.5MiB/s (93.9MB/s-93.9MB/s), io=895MiB (939MB), run=10002-10002msec 00:11:45.634 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=905MiB (949MB), run=10002-10002msec 00:11:45.634 ----------------------------------------------------- 00:11:45.634 Suppressions used: 00:11:45.634 count bytes template 00:11:45.634 6 48 /usr/src/fio/parse.c 00:11:45.634 2383 228768 /usr/src/fio/iolog.c 00:11:45.634 1 8 libtcmalloc_minimal.so 00:11:45.634 1 904 libcrypto.so 00:11:45.634 
----------------------------------------------------- 00:11:45.634 00:11:45.634 00:11:45.634 real 0m11.716s 00:11:45.634 user 0m28.422s 00:11:45.634 sys 0m18.081s 00:11:45.634 14:12:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.634 14:12:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 ************************************ 00:11:45.634 END TEST bdev_fio_rw_verify 00:11:45.634 ************************************ 00:11:45.634 14:12:46 -- bdev/blockdev.sh@348 -- # rm -f 00:11:45.634 14:12:46 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:45.634 14:12:46 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:45.634 14:12:46 -- common/autotest_common.sh@1270 -- # local workload=trim 00:11:45.634 14:12:46 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:11:45.634 14:12:46 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:45.634 14:12:46 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:45.634 14:12:46 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:45.634 14:12:46 -- common/autotest_common.sh@1290 -- # cat 00:11:45.634 14:12:46 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:11:45.634 14:12:46 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "4bdcc656-37de-4691-b8f4-ec2263fb3c50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4bdcc656-37de-4691-b8f4-ec2263fb3c50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9dc875ab-47e7-4235-8969-5cb1e2dad7da"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9dc875ab-47e7-4235-8969-5cb1e2dad7da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "03818936-7b30-4d9e-9b72-a877d2ca116f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"03818936-7b30-4d9e-9b72-a877d2ca116f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "72fc9bf4-603a-4144-b6fa-c8c326015bde"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72fc9bf4-603a-4144-b6fa-c8c326015bde",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f5bca23c-c464-44ff-8f5f-8788a00dfaa7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5bca23c-c464-44ff-8f5f-8788a00dfaa7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8185c1f3-7468-49ef-92b7-8eaa59cb4e8e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8185c1f3-7468-49ef-92b7-8eaa59cb4e8e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:11:45.634 14:12:46 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:45.634 14:12:46 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:11:45.634 14:12:46 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:45.634 /home/vagrant/spdk_repo/spdk 00:11:45.634 14:12:46 -- bdev/blockdev.sh@360 -- # popd 00:11:45.634 14:12:46 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:11:45.634 14:12:46 -- bdev/blockdev.sh@362 -- # return 0 00:11:45.634 00:11:45.634 real 0m11.885s 00:11:45.634 user 0m28.501s 00:11:45.634 sys 0m18.145s 00:11:45.634 14:12:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.634 ************************************ 00:11:45.634 END TEST bdev_fio 00:11:45.634 ************************************ 00:11:45.634 14:12:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 14:12:46 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM 
EXIT 00:11:45.634 14:12:46 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:45.634 14:12:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.634 14:12:46 -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 ************************************ 00:11:45.634 START TEST bdev_verify 00:11:45.634 ************************************ 00:11:45.634 14:12:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:45.634 [2024-12-04 14:12:46.339365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:45.634 [2024-12-04 14:12:46.339476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68208 ] 00:11:45.634 [2024-12-04 14:12:46.485026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:45.634 [2024-12-04 14:12:46.661209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.634 [2024-12-04 14:12:46.661375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.635 Running I/O for 5 seconds... 00:11:50.929 00:11:50.929 Latency(us) 00:11:50.929 [2024-12-04T14:12:52.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0x20000 00:11:50.929 nvme0n1 : 5.06 2413.43 9.43 0.00 0.00 52889.76 11897.30 70173.93 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x20000 length 0x20000 00:11:50.929 nvme0n1 : 5.07 2260.52 8.83 0.00 0.00 56251.51 15829.46 68964.04 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0x80000 00:11:50.929 nvme1n1 : 5.07 2173.63 8.49 0.00 0.00 58677.70 10939.47 83079.48 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x80000 length 0x80000 00:11:50.929 nvme1n1 : 5.08 2287.13 8.93 0.00 0.00 55718.41 6301.54 69367.34 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0x80000 00:11:50.929 nvme1n2 : 5.07 2308.03 9.02 0.00 0.00 55168.03 15224.52 71787.13 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x80000 length 0x80000 00:11:50.929 nvme1n2 : 5.08 2272.92 8.88 0.00 0.00 56073.20 4385.87 68964.04 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0x80000 00:11:50.929 nvme1n3 : 5.08 2292.73 8.96 0.00 0.00 55485.47 16434.41 84289.38 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme1n3 
(Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x80000 length 0x80000 00:11:50.929 nvme1n3 : 5.08 2400.97 9.38 0.00 0.00 52963.38 8771.74 81869.59 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0xbd0bd 00:11:50.929 nvme2n1 : 5.06 2309.29 9.02 0.00 0.00 55138.60 5797.42 75416.81 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:50.929 nvme2n1 : 5.09 2275.59 8.89 0.00 0.00 55798.65 6805.66 72190.42 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0x0 length 0xa0000 00:11:50.929 nvme3n1 : 5.07 2328.98 9.10 0.00 0.00 54455.46 11998.13 79449.80 00:11:50.929 [2024-12-04T14:12:52.394Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:50.929 Verification LBA range: start 0xa0000 length 0xa0000 00:11:50.929 nvme3n1 : 5.08 2481.64 9.69 0.00 0.00 51026.93 3402.83 79046.50 00:11:50.929 [2024-12-04T14:12:52.394Z] =================================================================================================================== 00:11:50.929 [2024-12-04T14:12:52.395Z] Total : 27804.86 108.61 0.00 0.00 54907.25 3402.83 84289.38 00:11:51.501 00:11:51.501 real 0m6.683s 00:11:51.501 user 0m8.588s 00:11:51.501 sys 0m3.154s 00:11:51.501 14:12:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:51.501 14:12:52 -- common/autotest_common.sh@10 -- # set +x 00:11:51.501 ************************************ 00:11:51.501 END TEST bdev_verify 00:11:51.501 ************************************ 00:11:51.761 14:12:53 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:51.761 14:12:53 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:51.761 14:12:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.761 14:12:53 -- common/autotest_common.sh@10 -- # set +x 00:11:51.761 ************************************ 00:11:51.761 START TEST bdev_verify_big_io 00:11:51.761 ************************************ 00:11:51.761 14:12:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:51.761 [2024-12-04 14:12:53.078589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:51.761 [2024-12-04 14:12:53.078699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68316 ] 00:11:52.021 [2024-12-04 14:12:53.227710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:52.021 [2024-12-04 14:12:53.402981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.021 [2024-12-04 14:12:53.403053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.593 Running I/O for 5 seconds... 
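Both verify stages here drive the same bdevperf example app; the invocation shape below uses the arguments taken from the traces above, with flag glosses reflecting bdevperf's usual semantics (the -C flag and the trailing empty argument are reproduced as passed, without interpretation).

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev definitions to load
        -q 128      # queue depth per job
        -o 65536    # IO size in bytes (the small-IO verify run above used 4096)
        -w verify   # write-then-read-back verification workload
        -t 5        # run time in seconds ("Running I/O for 5 seconds...")
        -C          # passed by the test wrapper
        -m 0x3      # core mask: reactors on cores 0 and 1
    )
    "$bdevperf" "${args[@]}" ''   # trailing '' argument as passed by the wrapper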
00:11:59.182 00:11:59.182 Latency(us) 00:11:59.182 [2024-12-04T14:13:00.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.182 [2024-12-04T14:13:00.647Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.182 Verification LBA range: start 0x0 length 0x2000 00:11:59.182 nvme0n1 : 5.47 236.98 14.81 0.00 0.00 532251.20 39523.25 587202.56 00:11:59.182 [2024-12-04T14:13:00.647Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.182 Verification LBA range: start 0x2000 length 0x2000 00:11:59.182 nvme0n1 : 5.50 253.26 15.83 0.00 0.00 489608.10 52428.80 729163.62 00:11:59.182 [2024-12-04T14:13:00.648Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x0 length 0x8000 00:11:59.183 nvme1n1 : 5.47 221.75 13.86 0.00 0.00 560536.44 30449.03 580749.78 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x8000 length 0x8000 00:11:59.183 nvme1n1 : 5.50 252.83 15.80 0.00 0.00 486112.44 27021.00 522674.81 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x0 length 0x8000 00:11:59.183 nvme1n2 : 5.48 236.84 14.80 0.00 0.00 515568.89 36700.16 571070.62 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x8000 length 0x8000 00:11:59.183 nvme1n2 : 5.51 235.49 14.72 0.00 0.00 514264.62 31255.63 603334.50 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x0 length 0x8000 00:11:59.183 nvme1n3 : 5.48 236.78 14.80 0.00 0.00 507959.08 36700.16 606560.89 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x8000 length 0x8000 00:11:59.183 nvme1n3 : 5.51 224.25 14.02 0.00 0.00 532590.21 33675.42 580749.78 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x0 length 0xbd0b 00:11:59.183 nvme2n1 : 5.50 284.73 17.80 0.00 0.00 416811.28 469.46 551712.30 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:59.183 nvme2n1 : 5.54 299.65 18.73 0.00 0.00 394031.04 20366.57 551712.30 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0x0 length 0xa000 00:11:59.183 nvme3n1 : 5.48 269.73 16.86 0.00 0.00 433486.97 36498.51 574297.01 00:11:59.183 [2024-12-04T14:13:00.648Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:59.183 Verification LBA range: start 0xa000 length 0xa000 00:11:59.183 nvme3n1 : 5.54 251.08 15.69 0.00 0.00 460200.29 1531.27 645277.54 00:11:59.183 [2024-12-04T14:13:00.648Z] =================================================================================================================== 00:11:59.183 [2024-12-04T14:13:00.648Z] Total : 3003.38 187.71 0.00 0.00 482502.82 469.46 729163.62 00:11:59.183 00:11:59.183 real 0m7.357s 00:11:59.183 user 
0m13.236s 00:11:59.183 sys 0m0.479s 00:11:59.183 14:13:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:59.183 ************************************ 00:11:59.183 END TEST bdev_verify_big_io 00:11:59.183 ************************************ 00:11:59.183 14:13:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.183 14:13:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.183 14:13:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:59.183 14:13:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.183 14:13:00 -- common/autotest_common.sh@10 -- # set +x 00:11:59.183 ************************************ 00:11:59.183 START TEST bdev_write_zeroes 00:11:59.183 ************************************ 00:11:59.183 14:13:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.183 [2024-12-04 14:13:00.503916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:59.183 [2024-12-04 14:13:00.504030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68415 ] 00:11:59.444 [2024-12-04 14:13:00.653722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.444 [2024-12-04 14:13:00.827965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.016 Running I/O for 1 seconds... 00:12:00.956 00:12:00.956 Latency(us) 00:12:00.956 [2024-12-04T14:13:02.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme0n1 : 1.01 12647.68 49.41 0.00 0.00 10110.99 5545.35 23794.61 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme1n1 : 1.01 12577.47 49.13 0.00 0.00 10159.56 7007.31 23088.84 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme1n2 : 1.01 12562.12 49.07 0.00 0.00 10164.37 7259.37 23996.26 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme1n3 : 1.01 12547.84 49.01 0.00 0.00 10164.80 7259.37 25004.50 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme2n1 : 1.02 13822.06 53.99 0.00 0.00 9221.74 4688.34 20769.87 00:12:00.956 [2024-12-04T14:13:02.421Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.956 nvme3n1 : 1.02 12579.90 49.14 0.00 0.00 10051.90 5772.21 24601.21 00:12:00.956 [2024-12-04T14:13:02.421Z] =================================================================================================================== 00:12:00.956 [2024-12-04T14:13:02.421Z] Total : 76737.08 299.75 0.00 0.00 9965.82 4688.34 25004.50 00:12:01.894 00:12:01.894 real 0m2.563s 00:12:01.894 user 0m1.969s 00:12:01.894 sys 0m0.424s 00:12:01.894 14:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 
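As a quick consistency check on the write_zeroes table above, the total bandwidth follows from total IOPS at the 4 KiB IO size:

    # Total row sanity check: 76737.08 IOPS x 4096 B per IO, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 76737.08 * 4096 / 1048576 }'   # -> 299.75 MiB/s

which matches the 299.75 MiB/s reported for the Total row.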
00:12:01.894 ************************************ 00:12:01.894 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:12:01.894 END TEST bdev_write_zeroes 00:12:01.894 ************************************ 00:12:01.894 14:13:03 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:01.894 14:13:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:01.894 14:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.894 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:12:01.894 ************************************ 00:12:01.894 START TEST bdev_json_nonenclosed 00:12:01.894 ************************************ 00:12:01.894 14:13:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:01.894 [2024-12-04 14:13:03.130636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:01.894 [2024-12-04 14:13:03.130745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68468 ] 00:12:01.894 [2024-12-04 14:13:03.279307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.155 [2024-12-04 14:13:03.455949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.155 [2024-12-04 14:13:03.456108] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:02.155 [2024-12-04 14:13:03.456125] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.417 00:12:02.417 real 0m0.665s 00:12:02.417 user 0m0.457s 00:12:02.417 sys 0m0.103s 00:12:02.417 ************************************ 00:12:02.417 END TEST bdev_json_nonenclosed 00:12:02.417 ************************************ 00:12:02.417 14:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:02.417 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 14:13:03 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:02.417 14:13:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:02.417 14:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.417 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 ************************************ 00:12:02.417 START TEST bdev_json_nonarray 00:12:02.417 ************************************ 00:12:02.417 14:13:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:02.417 [2024-12-04 14:13:03.849998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:02.417 [2024-12-04 14:13:03.850120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68498 ] 00:12:02.677 [2024-12-04 14:13:03.999136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.938 [2024-12-04 14:13:04.174349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.938 [2024-12-04 14:13:04.174498] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:02.938 [2024-12-04 14:13:04.174522] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.234 00:12:03.234 real 0m0.665s 00:12:03.234 user 0m0.468s 00:12:03.234 sys 0m0.092s 00:12:03.234 14:13:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:03.234 14:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 ************************************ 00:12:03.234 END TEST bdev_json_nonarray 00:12:03.234 ************************************ 00:12:03.234 14:13:04 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:03.234 14:13:04 -- bdev/blockdev.sh@809 -- # cleanup 00:12:03.234 14:13:04 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:03.234 14:13:04 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:03.234 14:13:04 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:12:03.234 14:13:04 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:04.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.313 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.313 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.886 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.886 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:12:13.148 00:12:13.148 real 1m1.401s 00:12:13.148 user 1m21.340s 00:12:13.148 sys 0m37.759s 00:12:13.148 14:13:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:13.148 ************************************ 00:12:13.148 END TEST blockdev_xnvme 00:12:13.148 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.148 ************************************ 00:12:13.148 14:13:14 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:13.148 14:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:13.148 14:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.148 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.148 ************************************ 00:12:13.148 START TEST ublk 00:12:13.148 ************************************ 00:12:13.148 14:13:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:13.148 * Looking for test storage... 
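The two negative JSON tests above hand bdevperf deliberately malformed configs; judging only from the error messages logged ("not enclosed in {}" and "'subsystems' should be an array"), the rejected shapes are roughly as follows. These bodies are reconstructions, not copies of the nonenclosed.json and nonarray.json files in the repo.

    # Reconstructed (assumed) shapes of the malformed configs:
    printf '%s\n' '"subsystems": []'     > nonenclosed.json  # top level is not an object
    printf '%s\n' '{ "subsystems": {} }' > nonarray.json     # "subsystems" is not an array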
00:12:13.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:13.148 14:13:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:13.148 14:13:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:13.148 14:13:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:13.148 14:13:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:13.148 14:13:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:13.148 14:13:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:13.148 14:13:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:13.148 14:13:14 -- scripts/common.sh@335 -- # IFS=.-: 00:12:13.148 14:13:14 -- scripts/common.sh@335 -- # read -ra ver1 00:12:13.148 14:13:14 -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.148 14:13:14 -- scripts/common.sh@336 -- # read -ra ver2 00:12:13.148 14:13:14 -- scripts/common.sh@337 -- # local 'op=<' 00:12:13.148 14:13:14 -- scripts/common.sh@339 -- # ver1_l=2 00:12:13.149 14:13:14 -- scripts/common.sh@340 -- # ver2_l=1 00:12:13.149 14:13:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:13.149 14:13:14 -- scripts/common.sh@343 -- # case "$op" in 00:12:13.149 14:13:14 -- scripts/common.sh@344 -- # : 1 00:12:13.149 14:13:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:13.149 14:13:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:13.149 14:13:14 -- scripts/common.sh@364 -- # decimal 1 00:12:13.149 14:13:14 -- scripts/common.sh@352 -- # local d=1 00:12:13.149 14:13:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.149 14:13:14 -- scripts/common.sh@354 -- # echo 1 00:12:13.149 14:13:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:13.149 14:13:14 -- scripts/common.sh@365 -- # decimal 2 00:12:13.149 14:13:14 -- scripts/common.sh@352 -- # local d=2 00:12:13.149 14:13:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.149 14:13:14 -- scripts/common.sh@354 -- # echo 2 00:12:13.149 14:13:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:13.149 14:13:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:13.149 14:13:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:13.149 14:13:14 -- scripts/common.sh@367 -- # return 0 00:12:13.149 14:13:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.149 14:13:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:13.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.149 --rc genhtml_branch_coverage=1 00:12:13.149 --rc genhtml_function_coverage=1 00:12:13.149 --rc genhtml_legend=1 00:12:13.149 --rc geninfo_all_blocks=1 00:12:13.149 --rc geninfo_unexecuted_blocks=1 00:12:13.149 00:12:13.149 ' 00:12:13.149 14:13:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:13.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.149 --rc genhtml_branch_coverage=1 00:12:13.149 --rc genhtml_function_coverage=1 00:12:13.149 --rc genhtml_legend=1 00:12:13.149 --rc geninfo_all_blocks=1 00:12:13.149 --rc geninfo_unexecuted_blocks=1 00:12:13.149 00:12:13.149 ' 00:12:13.149 14:13:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:13.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.149 --rc genhtml_branch_coverage=1 00:12:13.149 --rc genhtml_function_coverage=1 00:12:13.149 --rc genhtml_legend=1 00:12:13.149 --rc geninfo_all_blocks=1 00:12:13.149 --rc geninfo_unexecuted_blocks=1 00:12:13.149 00:12:13.149 ' 00:12:13.149 14:13:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:13.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.149 --rc genhtml_branch_coverage=1 00:12:13.149 --rc genhtml_function_coverage=1 00:12:13.149 --rc genhtml_legend=1 00:12:13.149 --rc geninfo_all_blocks=1 00:12:13.149 --rc geninfo_unexecuted_blocks=1 00:12:13.149 00:12:13.149 ' 00:12:13.149 14:13:14 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:13.149 14:13:14 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:13.149 14:13:14 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:13.149 14:13:14 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:13.149 14:13:14 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:13.149 14:13:14 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:13.149 14:13:14 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:13.149 14:13:14 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:13.149 14:13:14 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:13.149 14:13:14 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:13.149 14:13:14 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:13.149 14:13:14 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:13.149 14:13:14 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:13.149 14:13:14 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:13.149 14:13:14 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:13.149 14:13:14 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:13.149 14:13:14 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:13.149 14:13:14 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:13.149 14:13:14 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:13.149 14:13:14 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:13.149 14:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:13.149 14:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.149 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.149 ************************************ 00:12:13.149 START TEST test_save_ublk_config 00:12:13.149 ************************************ 00:12:13.149 14:13:14 -- common/autotest_common.sh@1114 -- # test_save_config 00:12:13.149 14:13:14 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:13.149 14:13:14 -- ublk/ublk.sh@103 -- # tgtpid=68801 00:12:13.149 14:13:14 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:13.149 14:13:14 -- ublk/ublk.sh@106 -- # waitforlisten 68801 00:12:13.149 14:13:14 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:13.149 14:13:14 -- common/autotest_common.sh@829 -- # '[' -z 68801 ']' 00:12:13.149 14:13:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.149 14:13:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.149 14:13:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.149 14:13:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.411 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:12:13.411 [2024-12-04 14:13:14.683398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
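The lcov version gate traced just above relies on the version helpers in scripts/common.sh, which split version strings on '.', '-' and ':' and compare component-wise; a condensed sketch of that logic (simplified to the numeric core, omitting the decimal() normalization seen in the trace):

    lt() { cmp_versions "$1" '<' "$2"; }    # e.g. lt 1.15 2 -> true

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == *'>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]                   # all components equal: only ==, <=, >= hold
    }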
00:12:13.411 [2024-12-04 14:13:14.683503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68801 ] 00:12:13.411 [2024-12-04 14:13:14.832233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.674 [2024-12-04 14:13:15.023144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:13.674 [2024-12-04 14:13:15.023341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.058 14:13:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.058 14:13:16 -- common/autotest_common.sh@862 -- # return 0 00:12:15.058 14:13:16 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:15.058 14:13:16 -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:15.058 14:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.058 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.058 [2024-12-04 14:13:16.178871] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:15.058 malloc0 00:12:15.058 [2024-12-04 14:13:16.242206] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:15.058 [2024-12-04 14:13:16.242278] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:15.058 [2024-12-04 14:13:16.242286] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:15.058 [2024-12-04 14:13:16.242294] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:15.058 [2024-12-04 14:13:16.250226] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:15.058 [2024-12-04 14:13:16.250248] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:15.058 [2024-12-04 14:13:16.258109] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:15.058 [2024-12-04 14:13:16.258198] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:15.058 [2024-12-04 14:13:16.275110] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:15.058 0 00:12:15.058 14:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.058 14:13:16 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:12:15.058 14:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.058 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:15.320 14:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.320 14:13:16 -- ublk/ublk.sh@115 -- # config='{ 00:12:15.320 "subsystems": [ 00:12:15.320 { 00:12:15.320 "subsystem": "iobuf", 00:12:15.320 "config": [ 00:12:15.320 { 00:12:15.320 "method": "iobuf_set_options", 00:12:15.320 "params": { 00:12:15.320 "small_pool_count": 8192, 00:12:15.320 "large_pool_count": 1024, 00:12:15.320 "small_bufsize": 8192, 00:12:15.320 "large_bufsize": 135168 00:12:15.320 } 00:12:15.320 } 00:12:15.320 ] 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "subsystem": "sock", 00:12:15.320 "config": [ 00:12:15.320 { 00:12:15.320 "method": "sock_impl_set_options", 00:12:15.320 "params": { 00:12:15.320 "impl_name": "posix", 00:12:15.320 "recv_buf_size": 2097152, 00:12:15.320 "send_buf_size": 2097152, 00:12:15.320 "enable_recv_pipe": true, 00:12:15.320 "enable_quickack": false, 00:12:15.320 "enable_placement_id": 0, 00:12:15.320 
"enable_zerocopy_send_server": true, 00:12:15.320 "enable_zerocopy_send_client": false, 00:12:15.320 "zerocopy_threshold": 0, 00:12:15.320 "tls_version": 0, 00:12:15.320 "enable_ktls": false 00:12:15.320 } 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "method": "sock_impl_set_options", 00:12:15.320 "params": { 00:12:15.320 "impl_name": "ssl", 00:12:15.320 "recv_buf_size": 4096, 00:12:15.320 "send_buf_size": 4096, 00:12:15.320 "enable_recv_pipe": true, 00:12:15.320 "enable_quickack": false, 00:12:15.320 "enable_placement_id": 0, 00:12:15.320 "enable_zerocopy_send_server": true, 00:12:15.320 "enable_zerocopy_send_client": false, 00:12:15.320 "zerocopy_threshold": 0, 00:12:15.320 "tls_version": 0, 00:12:15.320 "enable_ktls": false 00:12:15.320 } 00:12:15.320 } 00:12:15.320 ] 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "subsystem": "vmd", 00:12:15.320 "config": [] 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "subsystem": "accel", 00:12:15.320 "config": [ 00:12:15.320 { 00:12:15.320 "method": "accel_set_options", 00:12:15.320 "params": { 00:12:15.320 "small_cache_size": 128, 00:12:15.320 "large_cache_size": 16, 00:12:15.320 "task_count": 2048, 00:12:15.320 "sequence_count": 2048, 00:12:15.320 "buf_count": 2048 00:12:15.320 } 00:12:15.320 } 00:12:15.320 ] 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "subsystem": "bdev", 00:12:15.320 "config": [ 00:12:15.320 { 00:12:15.320 "method": "bdev_set_options", 00:12:15.320 "params": { 00:12:15.320 "bdev_io_pool_size": 65535, 00:12:15.320 "bdev_io_cache_size": 256, 00:12:15.320 "bdev_auto_examine": true, 00:12:15.320 "iobuf_small_cache_size": 128, 00:12:15.320 "iobuf_large_cache_size": 16 00:12:15.320 } 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "method": "bdev_raid_set_options", 00:12:15.320 "params": { 00:12:15.320 "process_window_size_kb": 1024 00:12:15.320 } 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "method": "bdev_iscsi_set_options", 00:12:15.320 "params": { 00:12:15.320 "timeout_sec": 30 00:12:15.320 } 00:12:15.320 }, 00:12:15.320 { 00:12:15.320 "method": "bdev_nvme_set_options", 00:12:15.320 "params": { 00:12:15.320 "action_on_timeout": "none", 00:12:15.320 "timeout_us": 0, 00:12:15.320 "timeout_admin_us": 0, 00:12:15.320 "keep_alive_timeout_ms": 10000, 00:12:15.320 "transport_retry_count": 4, 00:12:15.320 "arbitration_burst": 0, 00:12:15.320 "low_priority_weight": 0, 00:12:15.320 "medium_priority_weight": 0, 00:12:15.321 "high_priority_weight": 0, 00:12:15.321 "nvme_adminq_poll_period_us": 10000, 00:12:15.321 "nvme_ioq_poll_period_us": 0, 00:12:15.321 "io_queue_requests": 0, 00:12:15.321 "delay_cmd_submit": true, 00:12:15.321 "bdev_retry_count": 3, 00:12:15.321 "transport_ack_timeout": 0, 00:12:15.321 "ctrlr_loss_timeout_sec": 0, 00:12:15.321 "reconnect_delay_sec": 0, 00:12:15.321 "fast_io_fail_timeout_sec": 0, 00:12:15.321 "generate_uuids": false, 00:12:15.321 "transport_tos": 0, 00:12:15.321 "io_path_stat": false, 00:12:15.321 "allow_accel_sequence": false 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "method": "bdev_nvme_set_hotplug", 00:12:15.321 "params": { 00:12:15.321 "period_us": 100000, 00:12:15.321 "enable": false 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "method": "bdev_malloc_create", 00:12:15.321 "params": { 00:12:15.321 "name": "malloc0", 00:12:15.321 "num_blocks": 8192, 00:12:15.321 "block_size": 4096, 00:12:15.321 "physical_block_size": 4096, 00:12:15.321 "uuid": "df2a6c27-6103-4fba-b18b-fece2003dfd7", 00:12:15.321 "optimal_io_boundary": 0 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 
"method": "bdev_wait_for_examine" 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "scsi", 00:12:15.321 "config": null 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "scheduler", 00:12:15.321 "config": [ 00:12:15.321 { 00:12:15.321 "method": "framework_set_scheduler", 00:12:15.321 "params": { 00:12:15.321 "name": "static" 00:12:15.321 } 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "vhost_scsi", 00:12:15.321 "config": [] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "vhost_blk", 00:12:15.321 "config": [] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "ublk", 00:12:15.321 "config": [ 00:12:15.321 { 00:12:15.321 "method": "ublk_create_target", 00:12:15.321 "params": { 00:12:15.321 "cpumask": "1" 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "method": "ublk_start_disk", 00:12:15.321 "params": { 00:12:15.321 "bdev_name": "malloc0", 00:12:15.321 "ublk_id": 0, 00:12:15.321 "num_queues": 1, 00:12:15.321 "queue_depth": 128 00:12:15.321 } 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "nbd", 00:12:15.321 "config": [] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "nvmf", 00:12:15.321 "config": [ 00:12:15.321 { 00:12:15.321 "method": "nvmf_set_config", 00:12:15.321 "params": { 00:12:15.321 "discovery_filter": "match_any", 00:12:15.321 "admin_cmd_passthru": { 00:12:15.321 "identify_ctrlr": false 00:12:15.321 } 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "method": "nvmf_set_max_subsystems", 00:12:15.321 "params": { 00:12:15.321 "max_subsystems": 1024 00:12:15.321 } 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "method": "nvmf_set_crdt", 00:12:15.321 "params": { 00:12:15.321 "crdt1": 0, 00:12:15.321 "crdt2": 0, 00:12:15.321 "crdt3": 0 00:12:15.321 } 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 }, 00:12:15.321 { 00:12:15.321 "subsystem": "iscsi", 00:12:15.321 "config": [ 00:12:15.321 { 00:12:15.321 "method": "iscsi_set_options", 00:12:15.321 "params": { 00:12:15.321 "node_base": "iqn.2016-06.io.spdk", 00:12:15.321 "max_sessions": 128, 00:12:15.321 "max_connections_per_session": 2, 00:12:15.321 "max_queue_depth": 64, 00:12:15.321 "default_time2wait": 2, 00:12:15.321 "default_time2retain": 20, 00:12:15.321 "first_burst_length": 8192, 00:12:15.321 "immediate_data": true, 00:12:15.321 "allow_duplicated_isid": false, 00:12:15.321 "error_recovery_level": 0, 00:12:15.321 "nop_timeout": 60, 00:12:15.321 "nop_in_interval": 30, 00:12:15.321 "disable_chap": false, 00:12:15.321 "require_chap": false, 00:12:15.321 "mutual_chap": false, 00:12:15.321 "chap_group": 0, 00:12:15.321 "max_large_datain_per_connection": 64, 00:12:15.321 "max_r2t_per_connection": 4, 00:12:15.321 "pdu_pool_size": 36864, 00:12:15.321 "immediate_data_pool_size": 16384, 00:12:15.321 "data_out_pool_size": 2048 00:12:15.321 } 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 } 00:12:15.321 ] 00:12:15.321 }' 00:12:15.321 14:13:16 -- ublk/ublk.sh@116 -- # killprocess 68801 00:12:15.321 14:13:16 -- common/autotest_common.sh@936 -- # '[' -z 68801 ']' 00:12:15.321 14:13:16 -- common/autotest_common.sh@940 -- # kill -0 68801 00:12:15.321 14:13:16 -- common/autotest_common.sh@941 -- # uname 00:12:15.321 14:13:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.321 14:13:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68801 00:12:15.321 killing process with pid 68801 00:12:15.321 14:13:16 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:12:15.321 14:13:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.321 14:13:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68801' 00:12:15.321 14:13:16 -- common/autotest_common.sh@955 -- # kill 68801 00:12:15.321 14:13:16 -- common/autotest_common.sh@960 -- # wait 68801 00:12:16.255 [2024-12-04 14:13:17.492784] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:16.255 [2024-12-04 14:13:17.522172] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:16.255 [2024-12-04 14:13:17.522273] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:16.255 [2024-12-04 14:13:17.529160] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:16.255 [2024-12-04 14:13:17.529204] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:16.255 [2024-12-04 14:13:17.529213] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:16.255 [2024-12-04 14:13:17.529237] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:16.255 [2024-12-04 14:13:17.529342] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:17.631 14:13:18 -- ublk/ublk.sh@119 -- # tgtpid=68864 00:12:17.631 14:13:18 -- ublk/ublk.sh@121 -- # waitforlisten 68864 00:12:17.631 14:13:18 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:17.631 14:13:18 -- common/autotest_common.sh@829 -- # '[' -z 68864 ']' 00:12:17.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.631 14:13:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.631 14:13:18 -- ublk/ublk.sh@118 -- # echo '{ 00:12:17.631 "subsystems": [ 00:12:17.631 { 00:12:17.631 "subsystem": "iobuf", 00:12:17.631 "config": [ 00:12:17.631 { 00:12:17.631 "method": "iobuf_set_options", 00:12:17.631 "params": { 00:12:17.631 "small_pool_count": 8192, 00:12:17.631 "large_pool_count": 1024, 00:12:17.631 "small_bufsize": 8192, 00:12:17.631 "large_bufsize": 135168 00:12:17.631 } 00:12:17.631 } 00:12:17.631 ] 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "subsystem": "sock", 00:12:17.631 "config": [ 00:12:17.631 { 00:12:17.631 "method": "sock_impl_set_options", 00:12:17.631 "params": { 00:12:17.631 "impl_name": "posix", 00:12:17.631 "recv_buf_size": 2097152, 00:12:17.631 "send_buf_size": 2097152, 00:12:17.631 "enable_recv_pipe": true, 00:12:17.631 "enable_quickack": false, 00:12:17.631 "enable_placement_id": 0, 00:12:17.631 "enable_zerocopy_send_server": true, 00:12:17.631 "enable_zerocopy_send_client": false, 00:12:17.631 "zerocopy_threshold": 0, 00:12:17.631 "tls_version": 0, 00:12:17.631 "enable_ktls": false 00:12:17.631 } 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "method": "sock_impl_set_options", 00:12:17.631 "params": { 00:12:17.631 "impl_name": "ssl", 00:12:17.631 "recv_buf_size": 4096, 00:12:17.631 "send_buf_size": 4096, 00:12:17.631 "enable_recv_pipe": true, 00:12:17.631 "enable_quickack": false, 00:12:17.631 "enable_placement_id": 0, 00:12:17.631 "enable_zerocopy_send_server": true, 00:12:17.631 "enable_zerocopy_send_client": false, 00:12:17.631 "zerocopy_threshold": 0, 00:12:17.631 "tls_version": 0, 00:12:17.631 "enable_ktls": false 00:12:17.631 } 00:12:17.631 } 00:12:17.631 ] 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "subsystem": "vmd", 00:12:17.631 "config": [] 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "subsystem": "accel", 00:12:17.631 
"config": [ 00:12:17.631 { 00:12:17.631 "method": "accel_set_options", 00:12:17.631 "params": { 00:12:17.631 "small_cache_size": 128, 00:12:17.631 "large_cache_size": 16, 00:12:17.631 "task_count": 2048, 00:12:17.631 "sequence_count": 2048, 00:12:17.631 "buf_count": 2048 00:12:17.631 } 00:12:17.631 } 00:12:17.631 ] 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "subsystem": "bdev", 00:12:17.631 "config": [ 00:12:17.631 { 00:12:17.631 "method": "bdev_set_options", 00:12:17.631 "params": { 00:12:17.631 "bdev_io_pool_size": 65535, 00:12:17.631 "bdev_io_cache_size": 256, 00:12:17.631 "bdev_auto_examine": true, 00:12:17.631 "iobuf_small_cache_size": 128, 00:12:17.631 "iobuf_large_cache_size": 16 00:12:17.631 } 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "method": "bdev_raid_set_options", 00:12:17.631 "params": { 00:12:17.631 "process_window_size_kb": 1024 00:12:17.631 } 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "method": "bdev_iscsi_set_options", 00:12:17.631 "params": { 00:12:17.631 "timeout_sec": 30 00:12:17.631 } 00:12:17.631 }, 00:12:17.631 { 00:12:17.631 "method": "bdev_nvme_set_options", 00:12:17.631 "params": { 00:12:17.631 "action_on_timeout": "none", 00:12:17.631 "timeout_us": 0, 00:12:17.631 "timeout_admin_us": 0, 00:12:17.631 "keep_alive_timeout_ms": 10000, 00:12:17.631 "transport_retry_count": 4, 00:12:17.631 "arbitration_burst": 0, 00:12:17.631 "low_priority_weight": 0, 00:12:17.631 "medium_priority_weight": 0, 00:12:17.631 "high_priority_weight": 0, 00:12:17.631 "nvme_adminq_poll_period_us": 10000, 00:12:17.631 "nvme_ioq_poll_period_us": 0, 00:12:17.631 "io_queue_requests": 0, 00:12:17.631 "delay_cmd_submit": true, 00:12:17.631 "bdev_retry_count": 3, 00:12:17.631 "transport_ack_timeout": 0, 00:12:17.631 "ctrlr_loss_timeout_sec": 0, 00:12:17.631 "reconnect_delay_sec": 0, 00:12:17.631 "fast_io_fail_timeout_sec": 0, 00:12:17.631 "generate_uuids": false, 00:12:17.631 "transport_tos": 0, 00:12:17.631 "io_path_stat": false, 00:12:17.631 "allow_accel_sequence": false 00:12:17.631 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "bdev_nvme_set_hotplug", 00:12:17.632 "params": { 00:12:17.632 "period_us": 100000, 00:12:17.632 "enable": false 00:12:17.632 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "bdev_malloc_create", 00:12:17.632 "params": { 00:12:17.632 "name": "malloc0", 00:12:17.632 "num_blocks": 8192, 00:12:17.632 "block_size": 4096, 00:12:17.632 "physical_block_size": 4096, 00:12:17.632 "uuid": "df2a6c27-6103-4fba-b18b-fece2003dfd7", 00:12:17.632 "optimal_io_boundary": 0 00:12:17.632 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "bdev_wait_for_examine" 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "scsi", 00:12:17.632 "config": null 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "scheduler", 00:12:17.632 "config": [ 00:12:17.632 { 00:12:17.632 "method": "framework_set_scheduler", 00:12:17.632 "params": { 00:12:17.632 "name": "static" 00:12:17.632 } 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "vhost_scsi", 00:12:17.632 "config": [] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "vhost_blk", 00:12:17.632 "config": [] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "ublk", 00:12:17.632 "config": [ 00:12:17.632 { 00:12:17.632 "method": "ublk_create_target", 00:12:17.632 "params": { 00:12:17.632 "cpumask": "1" 00:12:17.632 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "ublk_start_disk", 00:12:17.632 "params": { 00:12:17.632 "bdev_name": 
"malloc0", 00:12:17.632 "ublk_id": 0, 00:12:17.632 "num_queues": 1, 00:12:17.632 "queue_depth": 128 00:12:17.632 } 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "nbd", 00:12:17.632 "config": [] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "nvmf", 00:12:17.632 "config": [ 00:12:17.632 { 00:12:17.632 "method": "nvmf_set_config", 00:12:17.632 "params": { 00:12:17.632 "discovery_filter": "match_any", 00:12:17.632 "admin_cmd_passthru": { 00:12:17.632 "identify_ctrlr": false 00:12:17.632 } 00:12:17.632 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "nvmf_set_max_subsystems", 00:12:17.632 "params": { 00:12:17.632 "max_subsystems": 1024 00:12:17.632 } 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "method": "nvmf_set_crdt", 00:12:17.632 "params": { 00:12:17.632 "crdt1": 0, 00:12:17.632 "crdt2": 0, 00:12:17.632 "crdt3": 0 00:12:17.632 } 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 }, 00:12:17.632 { 00:12:17.632 "subsystem": "iscsi", 00:12:17.632 "config": [ 00:12:17.632 { 00:12:17.632 "method": "iscsi_set_options", 00:12:17.632 "params": { 00:12:17.632 "node_base": "iqn.2016-06.io.spdk", 00:12:17.632 "max_sessions": 128, 00:12:17.632 "max_connections_per_session": 2, 00:12:17.632 "max_queue_depth": 64, 00:12:17.632 "default_time2wait": 2, 00:12:17.632 "default_time2retain": 20, 00:12:17.632 "first_burst_length": 8192, 00:12:17.632 "immediate_data": true, 00:12:17.632 "allow_duplicated_isid": false, 00:12:17.632 "error_recovery_level": 0, 00:12:17.632 "nop_timeout": 60, 00:12:17.632 "nop_in_interval": 30, 00:12:17.632 "disable_chap": false, 00:12:17.632 "require_chap": false, 00:12:17.632 "mutual_chap": false, 00:12:17.632 "chap_group": 0, 00:12:17.632 "max_large_datain_per_connection": 64, 00:12:17.632 "max_r2t_per_connection": 4, 00:12:17.632 "pdu_pool_size": 36864, 00:12:17.632 "immediate_data_pool_size": 16384, 00:12:17.632 "data_out_pool_size": 2048 00:12:17.632 } 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 } 00:12:17.632 ] 00:12:17.632 }' 00:12:17.632 14:13:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.632 14:13:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.632 14:13:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.632 14:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:17.632 [2024-12-04 14:13:18.776936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:17.632 [2024-12-04 14:13:18.777048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68864 ] 00:12:17.632 [2024-12-04 14:13:18.921674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.632 [2024-12-04 14:13:19.083511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.632 [2024-12-04 14:13:19.083659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.567 [2024-12-04 14:13:19.668687] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:18.567 [2024-12-04 14:13:19.676186] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:18.567 [2024-12-04 14:13:19.676241] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:18.567 [2024-12-04 14:13:19.676248] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:18.567 [2024-12-04 14:13:19.676253] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:18.567 [2024-12-04 14:13:19.684199] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:18.567 [2024-12-04 14:13:19.684218] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:18.567 [2024-12-04 14:13:19.692112] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:18.567 [2024-12-04 14:13:19.692182] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:18.567 [2024-12-04 14:13:19.709101] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:18.825 14:13:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.825 14:13:20 -- common/autotest_common.sh@862 -- # return 0 00:12:18.825 14:13:20 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:18.825 14:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.825 14:13:20 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:18.825 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:19.083 14:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.083 14:13:20 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:19.083 14:13:20 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:19.083 14:13:20 -- ublk/ublk.sh@125 -- # killprocess 68864 00:12:19.083 14:13:20 -- common/autotest_common.sh@936 -- # '[' -z 68864 ']' 00:12:19.083 14:13:20 -- common/autotest_common.sh@940 -- # kill -0 68864 00:12:19.083 14:13:20 -- common/autotest_common.sh@941 -- # uname 00:12:19.083 14:13:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.083 14:13:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68864 00:12:19.083 killing process with pid 68864 00:12:19.083 14:13:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.083 14:13:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.083 14:13:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68864' 00:12:19.083 14:13:20 -- common/autotest_common.sh@955 -- # kill 68864 00:12:19.083 14:13:20 -- common/autotest_common.sh@960 -- # wait 68864 00:12:19.675 [2024-12-04 14:13:21.056285] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 
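The second target has now rebuilt the ublk device purely from the replayed configuration, and the ublk_get_disks checks above verify that before teardown begins. In plain shell the verification amounts to roughly this (the jq filter and device path are copied from the trace; the rest is a sketch):

    blkpath=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 ]]    # the RPC layer reports the restored device
    [[ -b /dev/ublkb0 ]]             # and the kernel exposes the matching block node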
00:12:19.675 [2024-12-04 14:13:21.086172] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:19.675 [2024-12-04 14:13:21.086267] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:19.675 [2024-12-04 14:13:21.091109] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:19.675 [2024-12-04 14:13:21.091147] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:19.675 [2024-12-04 14:13:21.091152] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:19.675 [2024-12-04 14:13:21.091170] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:19.675 [2024-12-04 14:13:21.091276] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:21.057 14:13:22 -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:21.057 00:12:21.057 real 0m7.654s 00:12:21.057 user 0m5.828s 00:12:21.057 sys 0m2.718s 00:12:21.057 ************************************ 00:12:21.057 END TEST test_save_ublk_config 00:12:21.057 ************************************ 00:12:21.057 14:13:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:21.057 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:12:21.057 14:13:22 -- ublk/ublk.sh@139 -- # spdk_pid=68938 00:12:21.057 14:13:22 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:21.057 14:13:22 -- ublk/ublk.sh@141 -- # waitforlisten 68938 00:12:21.057 14:13:22 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:21.057 14:13:22 -- common/autotest_common.sh@829 -- # '[' -z 68938 ']' 00:12:21.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.058 14:13:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.058 14:13:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.058 14:13:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.058 14:13:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.058 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:12:21.058 [2024-12-04 14:13:22.370011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
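With the dedicated two-core target (pid 68938) up, the remaining tests drive the ublk RPC surface directly. test_create_ublk, which starts below, reduces to the following sequence; the command names, arguments, and the fio line mirror the rpc_cmd and run_fio_test entries in the trace, so treat this as a summary rather than the script itself:

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096             # creates bdev Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
    scripts/rpc.py ublk_stop_disk 0     # succeeds
    scripts/rpc.py ublk_stop_disk 0     # expected to fail with -19, "No such device"
    scripts/rpc.py ublk_destroy_target
    scripts/rpc.py bdev_malloc_delete Malloc0

test_create_multi_ublk then repeats the same create/start/stop cycle four devices at a time (Malloc0..Malloc3 exposed as /dev/ublkb0../dev/ublkb3, each with -q 4 -d 512) before destroying the target.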
00:12:21.058 [2024-12-04 14:13:22.370128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68938 ] 00:12:21.058 [2024-12-04 14:13:22.515624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.317 [2024-12-04 14:13:22.653553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.317 [2024-12-04 14:13:22.654234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.317 [2024-12-04 14:13:22.654321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.892 14:13:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.892 14:13:23 -- common/autotest_common.sh@862 -- # return 0 00:12:21.892 14:13:23 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:21.892 14:13:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:21.892 14:13:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.892 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.892 ************************************ 00:12:21.892 START TEST test_create_ublk 00:12:21.892 ************************************ 00:12:21.892 14:13:23 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:12:21.892 14:13:23 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:21.892 14:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.892 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.892 [2024-12-04 14:13:23.203546] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:21.892 14:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.892 14:13:23 -- ublk/ublk.sh@33 -- # ublk_target= 00:12:21.892 14:13:23 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:21.892 14:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.892 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:21.892 14:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.892 14:13:23 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:21.892 14:13:23 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:21.892 14:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.892 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.153 [2024-12-04 14:13:23.362201] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:22.153 [2024-12-04 14:13:23.362495] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:22.153 [2024-12-04 14:13:23.362507] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:22.153 [2024-12-04 14:13:23.362514] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:22.153 [2024-12-04 14:13:23.370300] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:22.153 [2024-12-04 14:13:23.370320] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:22.153 [2024-12-04 14:13:23.378107] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:22.153 [2024-12-04 14:13:23.390253] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:22.153 [2024-12-04 14:13:23.402105] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:12:22.153 14:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:22.153 14:13:23 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:22.153 14:13:23 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:22.153 14:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.153 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:12:22.153 14:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:22.153 { 00:12:22.153 "ublk_device": "/dev/ublkb0", 00:12:22.153 "id": 0, 00:12:22.153 "queue_depth": 512, 00:12:22.153 "num_queues": 4, 00:12:22.153 "bdev_name": "Malloc0" 00:12:22.153 } 00:12:22.153 ]' 00:12:22.153 14:13:23 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:22.153 14:13:23 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:22.153 14:13:23 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:22.153 14:13:23 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:22.153 14:13:23 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:22.153 14:13:23 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:22.153 14:13:23 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:22.153 14:13:23 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:22.153 14:13:23 -- lvol/common.sh@41 -- # local offset=0 00:12:22.153 14:13:23 -- lvol/common.sh@42 -- # local size=134217728 00:12:22.153 14:13:23 -- lvol/common.sh@43 -- # local rw=write 00:12:22.153 14:13:23 -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:22.153 14:13:23 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:22.153 14:13:23 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:22.153 14:13:23 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:22.153 14:13:23 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:22.153 14:13:23 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:22.153 14:13:23 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:12:22.412 fio: verification read phase will never start because write phase uses all of runtime 00:12:22.412 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:12:22.412 fio-3.35 00:12:22.412 Starting 1 process 00:12:32.388 00:12:32.388 fio_test: (groupid=0, jobs=1): err= 0: pid=68978: Wed Dec 4 14:13:33 2024 00:12:32.388 write: IOPS=20.9k, BW=81.6MiB/s (85.6MB/s)(817MiB/10001msec); 0 zone resets 00:12:32.388 clat (usec): min=31, max=4105, avg=47.11, stdev=80.35 00:12:32.388 lat (usec): min=32, max=4106, avg=47.53, stdev=80.36 00:12:32.388 clat percentiles (usec): 00:12:32.388 | 1.00th=[ 36], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 41], 00:12:32.388 | 30.00th=[ 
42], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 45], 00:12:32.388 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 57], 00:12:32.388 | 99.00th=[ 66], 99.50th=[ 71], 99.90th=[ 1090], 99.95th=[ 2474], 00:12:32.388 | 99.99th=[ 3458] 00:12:32.388 bw ( KiB/s): min=78656, max=89384, per=99.74%, avg=83389.47, stdev=3162.34, samples=19 00:12:32.388 iops : min=19662, max=22346, avg=20847.37, stdev=790.59, samples=19 00:12:32.388 lat (usec) : 50=90.01%, 100=9.75%, 250=0.10%, 500=0.02%, 750=0.01% 00:12:32.388 lat (usec) : 1000=0.01% 00:12:32.388 lat (msec) : 2=0.03%, 4=0.07%, 10=0.01% 00:12:32.388 cpu : usr=3.75%, sys=15.86%, ctx=209062, majf=0, minf=796 00:12:32.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.388 issued rwts: total=0,209045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.388 00:12:32.388 Run status group 0 (all jobs): 00:12:32.388 WRITE: bw=81.6MiB/s (85.6MB/s), 81.6MiB/s-81.6MiB/s (85.6MB/s-85.6MB/s), io=817MiB (856MB), run=10001-10001msec 00:12:32.388 00:12:32.388 Disk stats (read/write): 00:12:32.388 ublkb0: ios=0/206720, merge=0/0, ticks=0/8069, in_queue=8070, util=99.09% 00:12:32.388 14:13:33 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:12:32.388 14:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.388 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:32.388 [2024-12-04 14:13:33.813460] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:32.646 [2024-12-04 14:13:33.853530] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:32.646 [2024-12-04 14:13:33.855053] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:32.646 [2024-12-04 14:13:33.870102] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:32.646 [2024-12-04 14:13:33.870345] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:32.646 [2024-12-04 14:13:33.870355] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:32.646 14:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.646 14:13:33 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:12:32.646 14:13:33 -- common/autotest_common.sh@650 -- # local es=0 00:12:32.646 14:13:33 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:12:32.646 14:13:33 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:32.646 14:13:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.646 14:13:33 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:32.646 14:13:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.646 14:13:33 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:12:32.646 14:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.647 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:32.647 [2024-12-04 14:13:33.879189] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:12:32.647 request: 00:12:32.647 { 00:12:32.647 "ublk_id": 0, 00:12:32.647 "method": "ublk_stop_disk", 00:12:32.647 "req_id": 1 00:12:32.647 } 00:12:32.647 Got JSON-RPC error response 00:12:32.647 response: 00:12:32.647 { 00:12:32.647 "code": -19, 00:12:32.647 
"message": "No such device" 00:12:32.647 } 00:12:32.647 14:13:33 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:32.647 14:13:33 -- common/autotest_common.sh@653 -- # es=1 00:12:32.647 14:13:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.647 14:13:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.647 14:13:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.647 14:13:33 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:12:32.647 14:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.647 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:32.647 [2024-12-04 14:13:33.894143] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:32.647 [2024-12-04 14:13:33.897708] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:32.647 [2024-12-04 14:13:33.897734] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:32.647 14:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.647 14:13:33 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:32.647 14:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.647 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:32.904 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.904 14:13:34 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:12:32.904 14:13:34 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:32.904 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.904 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:32.904 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.904 14:13:34 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:32.904 14:13:34 -- lvol/common.sh@26 -- # jq length 00:12:32.904 14:13:34 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:32.904 14:13:34 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:32.904 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.904 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:32.904 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.904 14:13:34 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:32.904 14:13:34 -- lvol/common.sh@28 -- # jq length 00:12:32.904 ************************************ 00:12:32.904 END TEST test_create_ublk 00:12:32.904 ************************************ 00:12:32.904 14:13:34 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:32.904 00:12:32.904 real 0m11.150s 00:12:32.904 user 0m0.668s 00:12:32.904 sys 0m1.660s 00:12:32.904 14:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.904 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.161 14:13:34 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:12:33.161 14:13:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:33.161 14:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.161 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.161 ************************************ 00:12:33.161 START TEST test_create_multi_ublk 00:12:33.161 ************************************ 00:12:33.161 14:13:34 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:12:33.161 14:13:34 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:12:33.161 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.161 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.161 [2024-12-04 14:13:34.385563] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:12:33.161 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.161 14:13:34 -- ublk/ublk.sh@62 -- # ublk_target= 00:12:33.161 14:13:34 -- ublk/ublk.sh@64 -- # seq 0 3 00:12:33.161 14:13:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:33.161 14:13:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:12:33.161 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.161 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.161 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.161 14:13:34 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:12:33.161 14:13:34 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:33.161 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.161 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.162 [2024-12-04 14:13:34.612203] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:33.162 [2024-12-04 14:13:34.612513] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:33.162 [2024-12-04 14:13:34.612524] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:33.162 [2024-12-04 14:13:34.612531] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:33.162 [2024-12-04 14:13:34.624298] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:33.162 [2024-12-04 14:13:34.624318] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:33.419 [2024-12-04 14:13:34.636107] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:33.419 [2024-12-04 14:13:34.636591] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:33.419 [2024-12-04 14:13:34.684109] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:33.419 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.419 14:13:34 -- ublk/ublk.sh@68 -- # ublk_id=0 00:12:33.419 14:13:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:33.419 14:13:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:12:33.419 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.419 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.678 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.678 14:13:34 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:12:33.678 14:13:34 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:12:33.678 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.678 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.678 [2024-12-04 14:13:34.912183] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:12:33.678 [2024-12-04 14:13:34.912473] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:12:33.678 [2024-12-04 14:13:34.912486] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:33.678 [2024-12-04 14:13:34.912491] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:33.678 [2024-12-04 14:13:34.920120] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:33.678 [2024-12-04 14:13:34.920137] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:33.678 
[2024-12-04 14:13:34.928110] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:33.678 [2024-12-04 14:13:34.928588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:33.678 [2024-12-04 14:13:34.944116] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:33.678 14:13:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.678 14:13:34 -- ublk/ublk.sh@68 -- # ublk_id=1 00:12:33.678 14:13:34 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:33.678 14:13:34 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:12:33.678 14:13:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.678 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:12:33.678 14:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.678 14:13:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:12:33.678 14:13:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:12:33.678 14:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.678 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.678 [2024-12-04 14:13:35.104211] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:12:33.678 [2024-12-04 14:13:35.104493] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:12:33.678 [2024-12-04 14:13:35.104504] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:12:33.678 [2024-12-04 14:13:35.104511] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:12:33.678 [2024-12-04 14:13:35.112110] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:33.678 [2024-12-04 14:13:35.112129] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:33.678 [2024-12-04 14:13:35.120107] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:33.678 [2024-12-04 14:13:35.120588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:12:33.678 [2024-12-04 14:13:35.124758] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:12:33.678 14:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.678 14:13:35 -- ublk/ublk.sh@68 -- # ublk_id=2 00:12:33.678 14:13:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:33.678 14:13:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:12:33.678 14:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.678 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.936 14:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.936 14:13:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:12:33.936 14:13:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:12:33.936 14:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.936 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.936 [2024-12-04 14:13:35.284196] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:12:33.936 [2024-12-04 14:13:35.284481] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:12:33.936 [2024-12-04 14:13:35.284494] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:12:33.936 [2024-12-04 14:13:35.284499] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:12:33.936 [2024-12-04 14:13:35.292119] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:33.936 [2024-12-04 14:13:35.292136] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:33.936 [2024-12-04 14:13:35.300109] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:33.936 [2024-12-04 14:13:35.300588] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:12:33.936 [2024-12-04 14:13:35.304786] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:12:33.936 14:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.936 14:13:35 -- ublk/ublk.sh@68 -- # ublk_id=3 00:12:33.936 14:13:35 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:12:33.936 14:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.936 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:33.936 14:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.936 14:13:35 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:12:33.936 { 00:12:33.936 "ublk_device": "/dev/ublkb0", 00:12:33.936 "id": 0, 00:12:33.936 "queue_depth": 512, 00:12:33.936 "num_queues": 4, 00:12:33.936 "bdev_name": "Malloc0" 00:12:33.936 }, 00:12:33.936 { 00:12:33.936 "ublk_device": "/dev/ublkb1", 00:12:33.936 "id": 1, 00:12:33.936 "queue_depth": 512, 00:12:33.936 "num_queues": 4, 00:12:33.936 "bdev_name": "Malloc1" 00:12:33.936 }, 00:12:33.936 { 00:12:33.936 "ublk_device": "/dev/ublkb2", 00:12:33.936 "id": 2, 00:12:33.936 "queue_depth": 512, 00:12:33.936 "num_queues": 4, 00:12:33.936 "bdev_name": "Malloc2" 00:12:33.936 }, 00:12:33.936 { 00:12:33.936 "ublk_device": "/dev/ublkb3", 00:12:33.936 "id": 3, 00:12:33.936 "queue_depth": 512, 00:12:33.936 "num_queues": 4, 00:12:33.936 "bdev_name": "Malloc3" 00:12:33.936 } 00:12:33.936 ]' 00:12:33.936 14:13:35 -- ublk/ublk.sh@72 -- # seq 0 3 00:12:33.936 14:13:35 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:33.936 14:13:35 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:12:33.936 14:13:35 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:33.936 14:13:35 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:12:33.936 14:13:35 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:12:33.936 14:13:35 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:12:34.193 14:13:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:12:34.193 14:13:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:12:34.193 14:13:35 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.193 14:13:35 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:12:34.193 14:13:35 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:12:34.193 14:13:35 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:12:34.193 14:13:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:12:34.193 14:13:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:34.193 14:13:35 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:12:34.193 14:13:35 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:12:34.193 14:13:35 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.193 14:13:35 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:12:34.455 14:13:35 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:12:34.455 14:13:35 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:12:34.455 14:13:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:12:34.455 14:13:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:12:34.455 14:13:35 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.455 14:13:35 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:12:34.455 14:13:35 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:12:34.455 14:13:35 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:12:34.455 14:13:35 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:12:34.455 14:13:35 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:34.455 14:13:35 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:12:34.718 14:13:35 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:12:34.718 14:13:35 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:12:34.718 14:13:35 -- ublk/ublk.sh@85 -- # seq 0 3 00:12:34.718 14:13:35 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.718 14:13:35 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:12:34.718 14:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.718 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:34.718 [2024-12-04 14:13:35.952164] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:34.718 [2024-12-04 14:13:35.991142] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:34.718 [2024-12-04 14:13:35.991782] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:34.719 [2024-12-04 14:13:35.999122] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:34.719 [2024-12-04 14:13:35.999345] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:34.719 [2024-12-04 14:13:35.999359] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:34.719 14:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.719 14:13:36 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.719 14:13:36 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:12:34.719 14:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.719 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.719 [2024-12-04 14:13:36.015162] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:12:34.719 [2024-12-04 14:13:36.055136] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:34.719 [2024-12-04 14:13:36.055748] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:12:34.719 [2024-12-04 14:13:36.063171] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:34.719 [2024-12-04 14:13:36.063391] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:12:34.719 [2024-12-04 14:13:36.063404] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:12:34.719 14:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.719 14:13:36 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.719 14:13:36 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:12:34.719 14:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.719 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.719 [2024-12-04 14:13:36.079150] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:12:34.719 [2024-12-04 14:13:36.111532] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:34.719 [2024-12-04 14:13:36.112545] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:12:34.719 [2024-12-04 14:13:36.119110] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:34.719 [2024-12-04 14:13:36.119329] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:12:34.719 [2024-12-04 14:13:36.119344] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:12:34.719 14:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.719 14:13:36 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.719 14:13:36 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:12:34.719 14:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.719 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:34.719 [2024-12-04 14:13:36.135161] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:12:34.719 [2024-12-04 14:13:36.175102] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:34.719 [2024-12-04 14:13:36.175698] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:12:34.719 [2024-12-04 14:13:36.183166] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:34.719 [2024-12-04 14:13:36.183379] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:12:34.719 [2024-12-04 14:13:36.183390] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:12:34.977 14:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.977 14:13:36 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:12:34.977 [2024-12-04 14:13:36.367169] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:34.977 [2024-12-04 14:13:36.370713] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:34.977 [2024-12-04 14:13:36.370739] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:34.977 14:13:36 -- ublk/ublk.sh@93 -- # seq 0 3 00:12:34.977 14:13:36 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:34.977 14:13:36 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:34.977 14:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.977 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:35.544 14:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.544 14:13:36 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:35.544 14:13:36 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:12:35.544 14:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.544 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:12:35.803 14:13:37 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.803 14:13:37 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:35.803 14:13:37 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:35.803 14:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.803 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.062 14:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.062 14:13:37 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:36.062 14:13:37 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:12:36.062 14:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.062 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 14:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.322 14:13:37 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:12:36.322 14:13:37 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:36.322 14:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.322 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 14:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.322 14:13:37 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:36.322 14:13:37 -- lvol/common.sh@26 -- # jq length 00:12:36.322 14:13:37 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:36.322 14:13:37 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:36.322 14:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.322 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.322 14:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.322 14:13:37 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:36.322 14:13:37 -- lvol/common.sh@28 -- # jq length 00:12:36.322 ************************************ 00:12:36.322 END TEST test_create_multi_ublk 00:12:36.322 ************************************ 00:12:36.322 14:13:37 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:36.322 00:12:36.322 real 0m3.397s 00:12:36.322 user 0m0.776s 00:12:36.322 sys 0m0.146s 00:12:36.322 14:13:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:36.322 14:13:37 -- common/autotest_common.sh@10 -- # set +x 00:12:36.582 14:13:37 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:12:36.582 14:13:37 -- ublk/ublk.sh@147 -- # cleanup 00:12:36.582 14:13:37 -- ublk/ublk.sh@130 -- # killprocess 68938 00:12:36.582 14:13:37 -- common/autotest_common.sh@936 -- # '[' -z 68938 ']' 00:12:36.582 14:13:37 -- common/autotest_common.sh@940 -- # kill -0 68938 00:12:36.582 14:13:37 -- common/autotest_common.sh@941 -- # uname 00:12:36.582 14:13:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.582 14:13:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68938 00:12:36.582 killing process with pid 68938 00:12:36.582 14:13:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.582 14:13:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.582 14:13:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68938' 00:12:36.582 14:13:37 -- common/autotest_common.sh@955 -- # kill 68938 00:12:36.582 14:13:37 -- common/autotest_common.sh@960 -- # wait 68938 00:12:37.151 [2024-12-04 14:13:38.350422] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:12:37.151 [2024-12-04 14:13:38.350469] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:12:37.721 00:12:37.721 real 0m24.551s 00:12:37.721 user 0m35.440s 00:12:37.721 sys 0m9.528s 00:12:37.721 14:13:39 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:12:37.721 ************************************ 00:12:37.721 END TEST ublk 00:12:37.721 ************************************ 00:12:37.721 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.721 14:13:39 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:37.721 14:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:37.721 14:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.721 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.721 ************************************ 00:12:37.721 START TEST ublk_recovery 00:12:37.721 ************************************ 00:12:37.721 14:13:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:37.721 * Looking for test storage... 00:12:37.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:37.721 14:13:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:37.721 14:13:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:37.721 14:13:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:37.721 14:13:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:37.721 14:13:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:37.721 14:13:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:37.721 14:13:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:37.721 14:13:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:37.721 14:13:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:37.721 14:13:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.721 14:13:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:37.721 14:13:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:37.721 14:13:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:37.721 14:13:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:37.721 14:13:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:37.721 14:13:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:37.721 14:13:39 -- scripts/common.sh@344 -- # : 1 00:12:37.721 14:13:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:37.721 14:13:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.982 14:13:39 -- scripts/common.sh@364 -- # decimal 1 00:12:37.982 14:13:39 -- scripts/common.sh@352 -- # local d=1 00:12:37.982 14:13:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.982 14:13:39 -- scripts/common.sh@354 -- # echo 1 00:12:37.982 14:13:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:37.982 14:13:39 -- scripts/common.sh@365 -- # decimal 2 00:12:37.982 14:13:39 -- scripts/common.sh@352 -- # local d=2 00:12:37.982 14:13:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.982 14:13:39 -- scripts/common.sh@354 -- # echo 2 00:12:37.982 14:13:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:37.982 14:13:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:37.982 14:13:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:37.982 14:13:39 -- scripts/common.sh@367 -- # return 0 00:12:37.982 14:13:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.982 14:13:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.982 --rc genhtml_branch_coverage=1 00:12:37.982 --rc genhtml_function_coverage=1 00:12:37.982 --rc genhtml_legend=1 00:12:37.982 --rc geninfo_all_blocks=1 00:12:37.982 --rc geninfo_unexecuted_blocks=1 00:12:37.982 00:12:37.982 ' 00:12:37.982 14:13:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.982 --rc genhtml_branch_coverage=1 00:12:37.982 --rc genhtml_function_coverage=1 00:12:37.982 --rc genhtml_legend=1 00:12:37.982 --rc geninfo_all_blocks=1 00:12:37.982 --rc geninfo_unexecuted_blocks=1 00:12:37.982 00:12:37.982 ' 00:12:37.982 14:13:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.982 --rc genhtml_branch_coverage=1 00:12:37.982 --rc genhtml_function_coverage=1 00:12:37.982 --rc genhtml_legend=1 00:12:37.982 --rc geninfo_all_blocks=1 00:12:37.982 --rc geninfo_unexecuted_blocks=1 00:12:37.982 00:12:37.982 ' 00:12:37.982 14:13:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.982 --rc genhtml_branch_coverage=1 00:12:37.982 --rc genhtml_function_coverage=1 00:12:37.982 --rc genhtml_legend=1 00:12:37.982 --rc geninfo_all_blocks=1 00:12:37.982 --rc geninfo_unexecuted_blocks=1 00:12:37.982 00:12:37.982 ' 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:37.982 14:13:39 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:37.982 14:13:39 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:37.982 14:13:39 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:37.982 14:13:39 -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:37.982 14:13:39 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:37.982 14:13:39 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:37.982 14:13:39 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:37.982 14:13:39 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69329 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:12:37.982 14:13:39 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69329 00:12:37.982 14:13:39 -- common/autotest_common.sh@829 -- # '[' -z 69329 ']' 00:12:37.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.982 14:13:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.982 14:13:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.982 14:13:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.982 14:13:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.982 14:13:39 -- common/autotest_common.sh@10 -- # set +x 00:12:37.982 [2024-12-04 14:13:39.268951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.982 [2024-12-04 14:13:39.269069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69329 ] 00:12:37.982 [2024-12-04 14:13:39.417308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.242 [2024-12-04 14:13:39.630441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.242 [2024-12-04 14:13:39.630968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.242 [2024-12-04 14:13:39.631061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.620 14:13:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.620 14:13:40 -- common/autotest_common.sh@862 -- # return 0 00:12:39.620 14:13:40 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:12:39.620 14:13:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.620 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.620 [2024-12-04 14:13:40.744931] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:39.620 14:13:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.620 14:13:40 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:12:39.620 14:13:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.620 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.620 malloc0 00:12:39.620 14:13:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.620 14:13:40 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:12:39.620 14:13:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.620 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.620 [2024-12-04 14:13:40.847227] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:12:39.620 [2024-12-04 14:13:40.847325] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:12:39.620 [2024-12-04 14:13:40.847333] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:39.620 [2024-12-04 14:13:40.847342] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:39.620 [2024-12-04 14:13:40.855230] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:39.620 [2024-12-04 14:13:40.855253] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:39.620 [2024-12-04 14:13:40.863116] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:39.620 [2024-12-04 14:13:40.863255] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:39.620 [2024-12-04 14:13:40.879111] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:39.620 1 00:12:39.620 14:13:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.620 14:13:40 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:12:40.551 14:13:41 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69370 00:12:40.551 14:13:41 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:12:40.551 14:13:41 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:12:40.551 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:40.551 fio-3.35 00:12:40.551 Starting 1 process 00:12:45.816 14:13:46 -- ublk/ublk_recovery.sh@36 -- # kill -9 69329 00:12:45.816 14:13:46 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:12:51.106 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69329 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:12:51.106 14:13:51 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69481 00:12:51.106 14:13:51 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:51.106 14:13:51 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69481 00:12:51.106 14:13:51 -- common/autotest_common.sh@829 -- # '[' -z 69481 ']' 00:12:51.106 14:13:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.106 14:13:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.106 14:13:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.106 14:13:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.106 14:13:51 -- common/autotest_common.sh@10 -- # set +x 00:12:51.106 14:13:51 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:51.106 [2024-12-04 14:13:51.970767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
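The xtrace above is the heart of the recovery test: a first spdk_tgt (pid 69329) exports malloc0 as /dev/ublkb1, a 60-second fio run is started against it, and the target is then killed with SIGKILL mid-I/O before a second target (pid 69481) is launched to reclaim the device. A minimal sketch of that sequence, condensed from the logged commands (rpc.py stands in for the suite's rpc_cmd wrapper, and the waitforlisten polling of /var/tmp/spdk.sock is reduced to a sleep):

    #!/usr/bin/env bash
    # Sketch of the ublk_recovery sequence traced above; paths and flags as logged.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # first target (pid 69329 here)
    spdk_pid=$!
    sleep 1                                      # stand-in for waitforlisten
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_start_disk malloc0 1 -q 2 -d 128

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_pid=$!

    sleep 5
    kill -9 "$spdk_pid"                          # crash the target mid-I/O
    sleep 5

    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # second target (pid 69481 here)
    spdk_pid=$!
    sleep 1
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1           # reattach the live /dev/ublkb1
    wait "$fio_pid"                              # fio rides out the outage

The point of the exercise is that the kernel-side ublk device, and fio's open file descriptor on it, survive the user-space crash; the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY control commands in the trace below are how the new daemon reclaims the queues.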
00:12:51.106 [2024-12-04 14:13:51.971227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69481 ] 00:12:51.106 [2024-12-04 14:13:52.118988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.106 [2024-12-04 14:13:52.296361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:51.106 [2024-12-04 14:13:52.297046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.106 [2024-12-04 14:13:52.297084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.042 14:13:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.042 14:13:53 -- common/autotest_common.sh@862 -- # return 0 00:12:52.042 14:13:53 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:12:52.042 14:13:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.042 14:13:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.043 [2024-12-04 14:13:53.470955] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:52.043 14:13:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.043 14:13:53 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:12:52.043 14:13:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.043 14:13:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.302 malloc0 00:12:52.302 14:13:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.302 14:13:53 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:12:52.302 14:13:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.302 14:13:53 -- common/autotest_common.sh@10 -- # set +x 00:12:52.302 [2024-12-04 14:13:53.573224] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:12:52.302 [2024-12-04 14:13:53.573262] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:52.302 [2024-12-04 14:13:53.573271] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:12:52.302 [2024-12-04 14:13:53.579103] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:12:52.302 [2024-12-04 14:13:53.579122] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:12:52.302 [2024-12-04 14:13:53.579196] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:12:52.302 1 00:12:52.302 14:13:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.302 14:13:53 -- ublk/ublk_recovery.sh@52 -- # wait 69370 00:13:18.861 [2024-12-04 14:14:17.009106] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:18.861 [2024-12-04 14:14:17.014975] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:18.861 [2024-12-04 14:14:17.022257] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:18.862 [2024-12-04 14:14:17.022282] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:13:40.792 00:13:40.792 fio_test: (groupid=0, jobs=1): err= 0: pid=69373: Wed Dec 4 14:14:42 2024 00:13:40.792 read: IOPS=14.8k, BW=57.8MiB/s (60.6MB/s)(3470MiB/60001msec) 00:13:40.792 slat (nsec): min=1038, max=508411, avg=4885.13, 
stdev=1391.23 00:13:40.792 clat (usec): min=739, max=30138k, avg=4247.65, stdev=253766.49 00:13:40.792 lat (usec): min=742, max=30138k, avg=4252.53, stdev=253766.49 00:13:40.792 clat percentiles (usec): 00:13:40.792 | 1.00th=[ 1647], 5.00th=[ 1745], 10.00th=[ 1762], 20.00th=[ 1795], 00:13:40.792 | 30.00th=[ 1811], 40.00th=[ 1827], 50.00th=[ 1844], 60.00th=[ 1909], 00:13:40.792 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3064], 00:13:40.792 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 7242], 99.95th=[ 8455], 00:13:40.792 | 99.99th=[12780] 00:13:40.792 bw ( KiB/s): min=10000, max=134032, per=100.00%, avg=116839.63, stdev=21498.15, samples=60 00:13:40.792 iops : min= 2500, max=33508, avg=29209.92, stdev=5374.53, samples=60 00:13:40.792 write: IOPS=14.8k, BW=57.8MiB/s (60.6MB/s)(3465MiB/60001msec); 0 zone resets 00:13:40.792 slat (nsec): min=1046, max=115674, avg=4915.62, stdev=1265.90 00:13:40.792 clat (usec): min=543, max=30138k, avg=4393.28, stdev=257943.37 00:13:40.792 lat (usec): min=555, max=30138k, avg=4398.20, stdev=257943.37 00:13:40.792 clat percentiles (usec): 00:13:40.792 | 1.00th=[ 1680], 5.00th=[ 1827], 10.00th=[ 1844], 20.00th=[ 1876], 00:13:40.792 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1942], 60.00th=[ 1991], 00:13:40.792 | 70.00th=[ 2376], 80.00th=[ 2409], 90.00th=[ 2474], 95.00th=[ 2999], 00:13:40.792 | 99.00th=[ 5014], 99.50th=[ 5538], 99.90th=[ 7242], 99.95th=[ 8356], 00:13:40.792 | 99.99th=[12780] 00:13:40.792 bw ( KiB/s): min= 9552, max=134448, per=100.00%, avg=116655.75, stdev=21710.58, samples=60 00:13:40.792 iops : min= 2388, max=33612, avg=29163.92, stdev=5427.66, samples=60 00:13:40.792 lat (usec) : 750=0.01%, 1000=0.01% 00:13:40.792 lat (msec) : 2=60.83%, 4=36.47%, 10=2.66%, 20=0.03%, >=2000=0.01% 00:13:40.792 cpu : usr=3.32%, sys=14.98%, ctx=59497, majf=0, minf=13 00:13:40.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:40.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.792 issued rwts: total=888325,887086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.792 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.792 00:13:40.792 Run status group 0 (all jobs): 00:13:40.792 READ: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=3470MiB (3639MB), run=60001-60001msec 00:13:40.792 WRITE: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=3465MiB (3634MB), run=60001-60001msec 00:13:40.792 00:13:40.792 Disk stats (read/write): 00:13:40.792 ublkb1: ios=885954/884620, merge=0/0, ticks=3721323/3773719, in_queue=7495042, util=99.88% 00:13:40.792 14:14:42 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:13:40.792 14:14:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.792 14:14:42 -- common/autotest_common.sh@10 -- # set +x 00:13:40.792 [2024-12-04 14:14:42.137142] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.792 [2024-12-04 14:14:42.177211] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.792 [2024-12-04 14:14:42.177370] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.792 [2024-12-04 14:14:42.183117] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.792 [2024-12-04 14:14:42.183217] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
00:13:40.792 [2024-12-04 14:14:42.183226] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:40.792 14:14:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.792 14:14:42 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:13:40.792 14:14:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.792 14:14:42 -- common/autotest_common.sh@10 -- # set +x 00:13:40.792 [2024-12-04 14:14:42.197199] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:40.792 [2024-12-04 14:14:42.207103] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:40.792 [2024-12-04 14:14:42.207138] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:40.792 14:14:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.792 14:14:42 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:13:40.792 14:14:42 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:13:40.792 14:14:42 -- ublk/ublk_recovery.sh@14 -- # killprocess 69481 00:13:40.792 14:14:42 -- common/autotest_common.sh@936 -- # '[' -z 69481 ']' 00:13:40.792 14:14:42 -- common/autotest_common.sh@940 -- # kill -0 69481 00:13:40.792 14:14:42 -- common/autotest_common.sh@941 -- # uname 00:13:40.792 14:14:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.792 14:14:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69481 00:13:40.792 killing process with pid 69481 00:13:40.792 14:14:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.792 14:14:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.792 14:14:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69481' 00:13:40.792 14:14:42 -- common/autotest_common.sh@955 -- # kill 69481 00:13:40.792 14:14:42 -- common/autotest_common.sh@960 -- # wait 69481 00:13:42.165 [2024-12-04 14:14:43.407752] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:42.165 [2024-12-04 14:14:43.407797] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:43.103 00:13:43.103 real 1m5.265s 00:13:43.103 user 1m50.209s 00:13:43.103 sys 0m20.623s 00:13:43.103 14:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.103 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.103 ************************************ 00:13:43.103 END TEST ublk_recovery 00:13:43.103 ************************************ 00:13:43.103 14:14:44 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@255 -- # timing_exit lib 00:13:43.103 14:14:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.103 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.103 14:14:44 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:13:43.103 14:14:44 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:43.103 14:14:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:43.103 14:14:44 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:13:43.103 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:13:43.103 ************************************ 00:13:43.103 START TEST ftl 00:13:43.103 ************************************ 00:13:43.103 14:14:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:43.103 * Looking for test storage... 00:13:43.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:43.103 14:14:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:43.103 14:14:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:43.103 14:14:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:43.103 14:14:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:43.103 14:14:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:43.103 14:14:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:43.103 14:14:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:43.103 14:14:44 -- scripts/common.sh@335 -- # IFS=.-: 00:13:43.103 14:14:44 -- scripts/common.sh@335 -- # read -ra ver1 00:13:43.103 14:14:44 -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.103 14:14:44 -- scripts/common.sh@336 -- # read -ra ver2 00:13:43.103 14:14:44 -- scripts/common.sh@337 -- # local 'op=<' 00:13:43.103 14:14:44 -- scripts/common.sh@339 -- # ver1_l=2 00:13:43.103 14:14:44 -- scripts/common.sh@340 -- # ver2_l=1 00:13:43.103 14:14:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:43.103 14:14:44 -- scripts/common.sh@343 -- # case "$op" in 00:13:43.103 14:14:44 -- scripts/common.sh@344 -- # : 1 00:13:43.103 14:14:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:43.103 14:14:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.103 14:14:44 -- scripts/common.sh@364 -- # decimal 1 00:13:43.103 14:14:44 -- scripts/common.sh@352 -- # local d=1 00:13:43.103 14:14:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.103 14:14:44 -- scripts/common.sh@354 -- # echo 1 00:13:43.103 14:14:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:43.103 14:14:44 -- scripts/common.sh@365 -- # decimal 2 00:13:43.103 14:14:44 -- scripts/common.sh@352 -- # local d=2 00:13:43.103 14:14:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.103 14:14:44 -- scripts/common.sh@354 -- # echo 2 00:13:43.103 14:14:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:43.103 14:14:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:43.103 14:14:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:43.103 14:14:44 -- scripts/common.sh@367 -- # return 0 00:13:43.103 14:14:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.103 14:14:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.103 --rc genhtml_branch_coverage=1 00:13:43.103 --rc genhtml_function_coverage=1 00:13:43.103 --rc genhtml_legend=1 00:13:43.103 --rc geninfo_all_blocks=1 00:13:43.103 --rc geninfo_unexecuted_blocks=1 00:13:43.103 00:13:43.103 ' 00:13:43.103 14:14:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.103 --rc genhtml_branch_coverage=1 00:13:43.103 --rc genhtml_function_coverage=1 00:13:43.103 --rc genhtml_legend=1 00:13:43.103 --rc geninfo_all_blocks=1 00:13:43.103 --rc geninfo_unexecuted_blocks=1 00:13:43.103 00:13:43.103 ' 00:13:43.103 14:14:44 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.103 --rc genhtml_branch_coverage=1 00:13:43.103 --rc genhtml_function_coverage=1 00:13:43.103 --rc genhtml_legend=1 00:13:43.103 --rc geninfo_all_blocks=1 00:13:43.103 --rc geninfo_unexecuted_blocks=1 00:13:43.103 00:13:43.103 ' 00:13:43.103 14:14:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:43.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.103 --rc genhtml_branch_coverage=1 00:13:43.103 --rc genhtml_function_coverage=1 00:13:43.103 --rc genhtml_legend=1 00:13:43.103 --rc geninfo_all_blocks=1 00:13:43.103 --rc geninfo_unexecuted_blocks=1 00:13:43.103 00:13:43.103 ' 00:13:43.103 14:14:44 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:43.103 14:14:44 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:13:43.103 14:14:44 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:43.103 14:14:44 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:43.103 14:14:44 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:43.103 14:14:44 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:43.103 14:14:44 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.103 14:14:44 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:43.103 14:14:44 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:43.103 14:14:44 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.103 14:14:44 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.104 14:14:44 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:43.104 14:14:44 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:43.104 14:14:44 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:43.104 14:14:44 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:43.104 14:14:44 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:43.104 14:14:44 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:43.104 14:14:44 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.104 14:14:44 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:43.104 14:14:44 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:43.104 14:14:44 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:43.104 14:14:44 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:43.104 14:14:44 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:43.104 14:14:44 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:43.104 14:14:44 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:43.104 14:14:44 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:43.104 14:14:44 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:43.104 14:14:44 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:43.104 14:14:44 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:43.104 14:14:44 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.104 14:14:44 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:13:43.104 14:14:44 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:13:43.104 14:14:44 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:13:43.104 14:14:44 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:13:43.104 14:14:44 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:43.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:43.674 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:43.674 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:43.674 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:43.674 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:43.674 14:14:45 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70298 00:13:43.674 14:14:45 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:43.674 14:14:45 -- ftl/ftl.sh@38 -- # waitforlisten 70298 00:13:43.674 14:14:45 -- common/autotest_common.sh@829 -- # '[' -z 70298 ']' 00:13:43.674 14:14:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.674 14:14:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.674 14:14:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.674 14:14:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.674 14:14:45 -- common/autotest_common.sh@10 -- # set +x 00:13:43.674 [2024-12-04 14:14:45.076060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:43.674 [2024-12-04 14:14:45.076322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70298 ] 00:13:43.934 [2024-12-04 14:14:45.226495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.195 [2024-12-04 14:14:45.401821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.195 [2024-12-04 14:14:45.402149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.453 14:14:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.453 14:14:45 -- common/autotest_common.sh@862 -- # return 0 00:13:44.453 14:14:45 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:13:44.712 14:14:46 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:45.295 14:14:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:13:45.295 14:14:46 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:45.900 14:14:47 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:13:45.900 14:14:47 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:45.901 14:14:47 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:45.901 14:14:47 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:13:45.901 14:14:47 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:13:45.901 14:14:47 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:13:45.901 14:14:47 -- ftl/ftl.sh@50 
-- # break 00:13:45.901 14:14:47 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:13:45.901 14:14:47 -- ftl/ftl.sh@59 -- # base_size=1310720 00:13:45.901 14:14:47 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:13:45.901 14:14:47 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:13:46.161 14:14:47 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:13:46.161 14:14:47 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:13:46.161 14:14:47 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:13:46.161 14:14:47 -- ftl/ftl.sh@63 -- # break 00:13:46.161 14:14:47 -- ftl/ftl.sh@66 -- # killprocess 70298 00:13:46.161 14:14:47 -- common/autotest_common.sh@936 -- # '[' -z 70298 ']' 00:13:46.161 14:14:47 -- common/autotest_common.sh@940 -- # kill -0 70298 00:13:46.161 14:14:47 -- common/autotest_common.sh@941 -- # uname 00:13:46.161 14:14:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.161 14:14:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70298 00:13:46.161 killing process with pid 70298 00:13:46.161 14:14:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.161 14:14:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.161 14:14:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70298' 00:13:46.161 14:14:47 -- common/autotest_common.sh@955 -- # kill 70298 00:13:46.161 14:14:47 -- common/autotest_common.sh@960 -- # wait 70298 00:13:47.550 14:14:48 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:13:47.550 14:14:48 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:13:47.550 14:14:48 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:47.550 14:14:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:47.550 14:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.550 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:13:47.550 ************************************ 00:13:47.550 START TEST ftl_fio_basic 00:13:47.550 ************************************ 00:13:47.550 14:14:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:13:47.550 * Looking for test storage... 
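The device selection traced above boils down to two jq filters over bdev_get_bdevs: the NV-cache disk must be a non-zoned NVMe namespace of at least 1,310,720 blocks that exposes 64-byte metadata (md_size==64, presumably what the FTL write-buffer cache requires), and the base disk is any other non-zoned namespace of the same minimum size. Roughly, with the cache address hard-wired the way the trace shows it:

    # Device pick, as traced in ftl.sh@47 and ftl.sh@60 above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache candidates: non-zoned, >= 1310720 blocks, 64-byte metadata.
    cache_disks=$("$rpc" bdev_get_bdevs | jq -r '.[]
        | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')        # -> 0000:00:06.0 here

    # Base candidates: any other non-zoned namespace of the same minimum size
    # (the harness interpolates the cache address it just picked).
    base_disks=$("$rpc" bdev_get_bdevs | jq -r '.[]
        | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0"
                 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')        # -> 0000:00:07.0 here

The 1,310,720-block floor is 5 GiB at a 4 KiB block size, which is where the cache_size=1310720 and base_size=1310720 variables in the trace come from.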
00:13:47.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:13:47.550 14:14:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:47.550 14:14:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:47.550 14:14:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:47.810 14:14:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:47.810 14:14:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:47.810 14:14:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:47.810 14:14:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:47.810 14:14:49 -- scripts/common.sh@335 -- # IFS=.-: 00:13:47.810 14:14:49 -- scripts/common.sh@335 -- # read -ra ver1 00:13:47.810 14:14:49 -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.810 14:14:49 -- scripts/common.sh@336 -- # read -ra ver2 00:13:47.810 14:14:49 -- scripts/common.sh@337 -- # local 'op=<' 00:13:47.810 14:14:49 -- scripts/common.sh@339 -- # ver1_l=2 00:13:47.810 14:14:49 -- scripts/common.sh@340 -- # ver2_l=1 00:13:47.810 14:14:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:47.810 14:14:49 -- scripts/common.sh@343 -- # case "$op" in 00:13:47.810 14:14:49 -- scripts/common.sh@344 -- # : 1 00:13:47.810 14:14:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:47.810 14:14:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.810 14:14:49 -- scripts/common.sh@364 -- # decimal 1 00:13:47.810 14:14:49 -- scripts/common.sh@352 -- # local d=1 00:13:47.810 14:14:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.810 14:14:49 -- scripts/common.sh@354 -- # echo 1 00:13:47.810 14:14:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:47.810 14:14:49 -- scripts/common.sh@365 -- # decimal 2 00:13:47.810 14:14:49 -- scripts/common.sh@352 -- # local d=2 00:13:47.810 14:14:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.810 14:14:49 -- scripts/common.sh@354 -- # echo 2 00:13:47.810 14:14:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:47.810 14:14:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:47.810 14:14:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:47.810 14:14:49 -- scripts/common.sh@367 -- # return 0 00:13:47.810 14:14:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.810 14:14:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.810 --rc genhtml_branch_coverage=1 00:13:47.810 --rc genhtml_function_coverage=1 00:13:47.810 --rc genhtml_legend=1 00:13:47.810 --rc geninfo_all_blocks=1 00:13:47.810 --rc geninfo_unexecuted_blocks=1 00:13:47.810 00:13:47.810 ' 00:13:47.810 14:14:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.810 --rc genhtml_branch_coverage=1 00:13:47.810 --rc genhtml_function_coverage=1 00:13:47.810 --rc genhtml_legend=1 00:13:47.810 --rc geninfo_all_blocks=1 00:13:47.810 --rc geninfo_unexecuted_blocks=1 00:13:47.810 00:13:47.810 ' 00:13:47.810 14:14:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:47.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.810 --rc genhtml_branch_coverage=1 00:13:47.810 --rc genhtml_function_coverage=1 00:13:47.810 --rc genhtml_legend=1 00:13:47.810 --rc geninfo_all_blocks=1 00:13:47.811 --rc geninfo_unexecuted_blocks=1 00:13:47.811 00:13:47.811 ' 00:13:47.811 14:14:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:47.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.811 --rc genhtml_branch_coverage=1 00:13:47.811 --rc genhtml_function_coverage=1 00:13:47.811 --rc genhtml_legend=1 00:13:47.811 --rc geninfo_all_blocks=1 00:13:47.811 --rc geninfo_unexecuted_blocks=1 00:13:47.811 00:13:47.811 ' 00:13:47.811 14:14:49 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:13:47.811 14:14:49 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:13:47.811 14:14:49 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:13:47.811 14:14:49 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:13:47.811 14:14:49 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:13:47.811 14:14:49 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:47.811 14:14:49 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.811 14:14:49 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:13:47.811 14:14:49 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:13:47.811 14:14:49 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.811 14:14:49 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.811 14:14:49 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:13:47.811 14:14:49 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:13:47.811 14:14:49 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:47.811 14:14:49 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:13:47.811 14:14:49 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:13:47.811 14:14:49 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:13:47.811 14:14:49 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.811 14:14:49 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.811 14:14:49 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:13:47.811 14:14:49 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:13:47.811 14:14:49 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:47.811 14:14:49 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:13:47.811 14:14:49 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:47.811 14:14:49 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:13:47.811 14:14:49 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:13:47.811 14:14:49 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:13:47.811 14:14:49 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:47.811 14:14:49 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:47.811 14:14:49 -- ftl/fio.sh@11 -- # declare -A suite 00:13:47.811 14:14:49 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:47.811 14:14:49 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:13:47.811 14:14:49 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:13:47.811 14:14:49 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.811 14:14:49 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:13:47.811 14:14:49 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:13:47.811 14:14:49 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:13:47.811 14:14:49 -- ftl/fio.sh@26 -- # uuid= 00:13:47.811 14:14:49 -- ftl/fio.sh@27 -- # timeout=240 00:13:47.811 14:14:49 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:13:47.811 14:14:49 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:13:47.811 14:14:49 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:13:47.811 14:14:49 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:13:47.811 14:14:49 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:47.811 14:14:49 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:13:47.811 14:14:49 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:13:47.811 14:14:49 -- ftl/fio.sh@45 -- # svcpid=70429 00:13:47.811 14:14:49 -- ftl/fio.sh@46 -- # waitforlisten 70429 00:13:47.811 14:14:49 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:13:47.811 14:14:49 -- common/autotest_common.sh@829 -- # '[' -z 70429 ']' 00:13:47.811 14:14:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.811 14:14:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.811 14:14:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.811 14:14:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.811 14:14:49 -- common/autotest_common.sh@10 -- # set +x 00:13:47.811 [2024-12-04 14:14:49.123280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
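The fio.sh prologue that follows sizes every bdev with a get_bdev_size helper, and its xtrace (common/autotest_common.sh@1367 through @1377) shows enough of the body to reconstruct it; the sketch below is an inference from the logged steps, not a verbatim copy of the helper:

    # get_bdev_size NAME: bdev capacity in MiB, pieced together from the
    # autotest_common.sh@1367-@1377 trace lines below (body is inferred).
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")   # 4096 for nvme0n1 below
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")   # 1310720 for nvme0n1 below
        echo $(( bs * nb / 1024 / 1024 ))            # 4096*1310720/2^20 = 5120 MiB
    }

That is where the 5120 figure (the whole 1,310,720-block namespace) and the 103424 figure (the 26,476,544-block thin lvol) in the traces below come from. Note also that a few steps further on, fio.sh line 52 logs '[: -eq: unary operator expected': the left-hand operand of that test expands to an empty string, and the run simply continues past the error.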
00:13:47.811 [2024-12-04 14:14:49.123504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70429 ] 00:13:47.811 [2024-12-04 14:14:49.265478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.070 [2024-12-04 14:14:49.406426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.070 [2024-12-04 14:14:49.406797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.070 [2024-12-04 14:14:49.407155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.070 [2024-12-04 14:14:49.407185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.636 14:14:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.636 14:14:49 -- common/autotest_common.sh@862 -- # return 0 00:13:48.636 14:14:49 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:13:48.636 14:14:49 -- ftl/common.sh@54 -- # local name=nvme0 00:13:48.636 14:14:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:13:48.636 14:14:49 -- ftl/common.sh@56 -- # local size=103424 00:13:48.636 14:14:49 -- ftl/common.sh@59 -- # local base_bdev 00:13:48.636 14:14:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:13:48.894 14:14:50 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:13:48.894 14:14:50 -- ftl/common.sh@62 -- # local base_size 00:13:48.894 14:14:50 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:13:48.894 14:14:50 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:13:48.894 14:14:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:48.894 14:14:50 -- common/autotest_common.sh@1369 -- # local bs 00:13:48.894 14:14:50 -- common/autotest_common.sh@1370 -- # local nb 00:13:48.894 14:14:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:13:49.153 14:14:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:49.153 { 00:13:49.153 "name": "nvme0n1", 00:13:49.153 "aliases": [ 00:13:49.153 "c4364a75-64ab-4a87-88f5-812292a110f6" 00:13:49.153 ], 00:13:49.153 "product_name": "NVMe disk", 00:13:49.153 "block_size": 4096, 00:13:49.153 "num_blocks": 1310720, 00:13:49.153 "uuid": "c4364a75-64ab-4a87-88f5-812292a110f6", 00:13:49.153 "assigned_rate_limits": { 00:13:49.153 "rw_ios_per_sec": 0, 00:13:49.153 "rw_mbytes_per_sec": 0, 00:13:49.153 "r_mbytes_per_sec": 0, 00:13:49.153 "w_mbytes_per_sec": 0 00:13:49.153 }, 00:13:49.153 "claimed": false, 00:13:49.153 "zoned": false, 00:13:49.153 "supported_io_types": { 00:13:49.153 "read": true, 00:13:49.153 "write": true, 00:13:49.153 "unmap": true, 00:13:49.153 "write_zeroes": true, 00:13:49.153 "flush": true, 00:13:49.153 "reset": true, 00:13:49.153 "compare": true, 00:13:49.153 "compare_and_write": false, 00:13:49.153 "abort": true, 00:13:49.153 "nvme_admin": true, 00:13:49.153 "nvme_io": true 00:13:49.153 }, 00:13:49.153 "driver_specific": { 00:13:49.153 "nvme": [ 00:13:49.153 { 00:13:49.153 "pci_address": "0000:00:07.0", 00:13:49.153 "trid": { 00:13:49.153 "trtype": "PCIe", 00:13:49.153 "traddr": "0000:00:07.0" 00:13:49.153 }, 00:13:49.153 "ctrlr_data": { 00:13:49.153 "cntlid": 0, 00:13:49.153 "vendor_id": "0x1b36", 00:13:49.153 "model_number": "QEMU NVMe Ctrl", 00:13:49.153 "serial_number": 
"12341", 00:13:49.153 "firmware_revision": "8.0.0", 00:13:49.153 "subnqn": "nqn.2019-08.org.qemu:12341", 00:13:49.153 "oacs": { 00:13:49.153 "security": 0, 00:13:49.153 "format": 1, 00:13:49.153 "firmware": 0, 00:13:49.153 "ns_manage": 1 00:13:49.153 }, 00:13:49.153 "multi_ctrlr": false, 00:13:49.153 "ana_reporting": false 00:13:49.153 }, 00:13:49.153 "vs": { 00:13:49.153 "nvme_version": "1.4" 00:13:49.153 }, 00:13:49.153 "ns_data": { 00:13:49.153 "id": 1, 00:13:49.153 "can_share": false 00:13:49.153 } 00:13:49.153 } 00:13:49.153 ], 00:13:49.153 "mp_policy": "active_passive" 00:13:49.153 } 00:13:49.153 } 00:13:49.153 ]' 00:13:49.153 14:14:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:49.153 14:14:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:49.153 14:14:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:49.153 14:14:50 -- common/autotest_common.sh@1373 -- # nb=1310720 00:13:49.153 14:14:50 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:13:49.153 14:14:50 -- common/autotest_common.sh@1377 -- # echo 5120 00:13:49.153 14:14:50 -- ftl/common.sh@63 -- # base_size=5120 00:13:49.153 14:14:50 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:13:49.153 14:14:50 -- ftl/common.sh@67 -- # clear_lvols 00:13:49.153 14:14:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:49.153 14:14:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:13:49.411 14:14:50 -- ftl/common.sh@28 -- # stores= 00:13:49.411 14:14:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:13:49.411 14:14:50 -- ftl/common.sh@68 -- # lvs=eb6e116b-3d33-45c9-80d0-0402ab96dc7c 00:13:49.412 14:14:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eb6e116b-3d33-45c9-80d0-0402ab96dc7c 00:13:49.669 14:14:51 -- ftl/fio.sh@48 -- # split_bdev=11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.670 14:14:51 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.670 14:14:51 -- ftl/common.sh@35 -- # local name=nvc0 00:13:49.670 14:14:51 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:13:49.670 14:14:51 -- ftl/common.sh@37 -- # local base_bdev=11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.670 14:14:51 -- ftl/common.sh@38 -- # local cache_size= 00:13:49.670 14:14:51 -- ftl/common.sh@41 -- # get_bdev_size 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.670 14:14:51 -- common/autotest_common.sh@1367 -- # local bdev_name=11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.670 14:14:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:49.670 14:14:51 -- common/autotest_common.sh@1369 -- # local bs 00:13:49.670 14:14:51 -- common/autotest_common.sh@1370 -- # local nb 00:13:49.670 14:14:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:49.927 14:14:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:49.927 { 00:13:49.927 "name": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:49.927 "aliases": [ 00:13:49.927 "lvs/nvme0n1p0" 00:13:49.927 ], 00:13:49.927 "product_name": "Logical Volume", 00:13:49.927 "block_size": 4096, 00:13:49.927 "num_blocks": 26476544, 00:13:49.927 "uuid": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:49.927 "assigned_rate_limits": { 00:13:49.927 "rw_ios_per_sec": 0, 00:13:49.927 "rw_mbytes_per_sec": 0, 00:13:49.927 "r_mbytes_per_sec": 0, 00:13:49.927 
"w_mbytes_per_sec": 0 00:13:49.927 }, 00:13:49.927 "claimed": false, 00:13:49.927 "zoned": false, 00:13:49.927 "supported_io_types": { 00:13:49.927 "read": true, 00:13:49.927 "write": true, 00:13:49.927 "unmap": true, 00:13:49.927 "write_zeroes": true, 00:13:49.927 "flush": false, 00:13:49.927 "reset": true, 00:13:49.927 "compare": false, 00:13:49.927 "compare_and_write": false, 00:13:49.927 "abort": false, 00:13:49.927 "nvme_admin": false, 00:13:49.927 "nvme_io": false 00:13:49.927 }, 00:13:49.927 "driver_specific": { 00:13:49.927 "lvol": { 00:13:49.927 "lvol_store_uuid": "eb6e116b-3d33-45c9-80d0-0402ab96dc7c", 00:13:49.927 "base_bdev": "nvme0n1", 00:13:49.927 "thin_provision": true, 00:13:49.927 "snapshot": false, 00:13:49.927 "clone": false, 00:13:49.927 "esnap_clone": false 00:13:49.927 } 00:13:49.927 } 00:13:49.927 } 00:13:49.927 ]' 00:13:49.927 14:14:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:49.927 14:14:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:49.927 14:14:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:49.927 14:14:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:49.927 14:14:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:49.927 14:14:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:49.927 14:14:51 -- ftl/common.sh@41 -- # local base_size=5171 00:13:49.927 14:14:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:13:49.927 14:14:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:13:50.185 14:14:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:13:50.185 14:14:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:13:50.185 14:14:51 -- ftl/common.sh@48 -- # get_bdev_size 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.185 14:14:51 -- common/autotest_common.sh@1367 -- # local bdev_name=11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.185 14:14:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:50.185 14:14:51 -- common/autotest_common.sh@1369 -- # local bs 00:13:50.185 14:14:51 -- common/autotest_common.sh@1370 -- # local nb 00:13:50.185 14:14:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.442 14:14:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:50.442 { 00:13:50.442 "name": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:50.442 "aliases": [ 00:13:50.442 "lvs/nvme0n1p0" 00:13:50.442 ], 00:13:50.442 "product_name": "Logical Volume", 00:13:50.442 "block_size": 4096, 00:13:50.442 "num_blocks": 26476544, 00:13:50.442 "uuid": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:50.442 "assigned_rate_limits": { 00:13:50.442 "rw_ios_per_sec": 0, 00:13:50.442 "rw_mbytes_per_sec": 0, 00:13:50.442 "r_mbytes_per_sec": 0, 00:13:50.442 "w_mbytes_per_sec": 0 00:13:50.442 }, 00:13:50.442 "claimed": false, 00:13:50.442 "zoned": false, 00:13:50.442 "supported_io_types": { 00:13:50.442 "read": true, 00:13:50.442 "write": true, 00:13:50.442 "unmap": true, 00:13:50.442 "write_zeroes": true, 00:13:50.442 "flush": false, 00:13:50.442 "reset": true, 00:13:50.442 "compare": false, 00:13:50.442 "compare_and_write": false, 00:13:50.442 "abort": false, 00:13:50.442 "nvme_admin": false, 00:13:50.442 "nvme_io": false 00:13:50.442 }, 00:13:50.442 "driver_specific": { 00:13:50.442 "lvol": { 00:13:50.442 "lvol_store_uuid": "eb6e116b-3d33-45c9-80d0-0402ab96dc7c", 00:13:50.442 "base_bdev": "nvme0n1", 00:13:50.442 "thin_provision": true, 
00:13:50.442 "snapshot": false, 00:13:50.442 "clone": false, 00:13:50.442 "esnap_clone": false 00:13:50.442 } 00:13:50.442 } 00:13:50.442 } 00:13:50.442 ]' 00:13:50.442 14:14:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:50.442 14:14:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:50.442 14:14:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:50.442 14:14:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:50.442 14:14:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:50.442 14:14:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:50.442 14:14:51 -- ftl/common.sh@48 -- # cache_size=5171 00:13:50.442 14:14:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:13:50.700 14:14:51 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:13:50.700 14:14:51 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:13:50.700 14:14:51 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:13:50.700 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:13:50.700 14:14:51 -- ftl/fio.sh@56 -- # get_bdev_size 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.700 14:14:51 -- common/autotest_common.sh@1367 -- # local bdev_name=11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.700 14:14:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:13:50.700 14:14:51 -- common/autotest_common.sh@1369 -- # local bs 00:13:50.700 14:14:51 -- common/autotest_common.sh@1370 -- # local nb 00:13:50.700 14:14:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11f6955a-cc4e-4d75-b970-b85770258c3b 00:13:50.700 14:14:52 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:13:50.700 { 00:13:50.700 "name": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:50.700 "aliases": [ 00:13:50.700 "lvs/nvme0n1p0" 00:13:50.700 ], 00:13:50.700 "product_name": "Logical Volume", 00:13:50.700 "block_size": 4096, 00:13:50.700 "num_blocks": 26476544, 00:13:50.700 "uuid": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:50.700 "assigned_rate_limits": { 00:13:50.700 "rw_ios_per_sec": 0, 00:13:50.700 "rw_mbytes_per_sec": 0, 00:13:50.700 "r_mbytes_per_sec": 0, 00:13:50.700 "w_mbytes_per_sec": 0 00:13:50.700 }, 00:13:50.700 "claimed": false, 00:13:50.700 "zoned": false, 00:13:50.700 "supported_io_types": { 00:13:50.700 "read": true, 00:13:50.700 "write": true, 00:13:50.700 "unmap": true, 00:13:50.700 "write_zeroes": true, 00:13:50.700 "flush": false, 00:13:50.700 "reset": true, 00:13:50.700 "compare": false, 00:13:50.700 "compare_and_write": false, 00:13:50.700 "abort": false, 00:13:50.700 "nvme_admin": false, 00:13:50.700 "nvme_io": false 00:13:50.700 }, 00:13:50.700 "driver_specific": { 00:13:50.700 "lvol": { 00:13:50.700 "lvol_store_uuid": "eb6e116b-3d33-45c9-80d0-0402ab96dc7c", 00:13:50.700 "base_bdev": "nvme0n1", 00:13:50.700 "thin_provision": true, 00:13:50.700 "snapshot": false, 00:13:50.700 "clone": false, 00:13:50.700 "esnap_clone": false 00:13:50.700 } 00:13:50.700 } 00:13:50.700 } 00:13:50.700 ]' 00:13:50.700 14:14:52 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:13:50.959 14:14:52 -- common/autotest_common.sh@1372 -- # bs=4096 00:13:50.959 14:14:52 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:13:50.959 14:14:52 -- common/autotest_common.sh@1373 -- # nb=26476544 00:13:50.959 14:14:52 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:13:50.959 14:14:52 -- common/autotest_common.sh@1377 -- # echo 103424 00:13:50.959 
14:14:52 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:13:50.959 14:14:52 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:13:50.959 14:14:52 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 11f6955a-cc4e-4d75-b970-b85770258c3b -c nvc0n1p0 --l2p_dram_limit 60 00:13:50.959 [2024-12-04 14:14:52.384033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.384170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:13:50.959 [2024-12-04 14:14:52.384191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:13:50.959 [2024-12-04 14:14:52.384198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.384262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.384270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:13:50.959 [2024-12-04 14:14:52.384278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:13:50.959 [2024-12-04 14:14:52.384284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.384304] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:13:50.959 [2024-12-04 14:14:52.384850] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:13:50.959 [2024-12-04 14:14:52.384865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.384871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:13:50.959 [2024-12-04 14:14:52.384879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:13:50.959 [2024-12-04 14:14:52.384884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.384940] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f4ab0eb-8883-41c9-abcb-0989134a40cf 00:13:50.959 [2024-12-04 14:14:52.385880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.385910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:13:50.959 [2024-12-04 14:14:52.385918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:13:50.959 [2024-12-04 14:14:52.385925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.390696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.390729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:13:50.959 [2024-12-04 14:14:52.390736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:13:50.959 [2024-12-04 14:14:52.390744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.390811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.390819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:13:50.959 [2024-12-04 14:14:52.390826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:13:50.959 [2024-12-04 14:14:52.390834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.959 [2024-12-04 14:14:52.390880] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:13:50.959 [2024-12-04 14:14:52.390888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:13:50.959 [2024-12-04 14:14:52.390894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:13:50.960 [2024-12-04 14:14:52.390912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.960 [2024-12-04 14:14:52.390936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:13:50.960 [2024-12-04 14:14:52.393864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.960 [2024-12-04 14:14:52.393888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:13:50.960 [2024-12-04 14:14:52.393896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:13:50.960 [2024-12-04 14:14:52.393902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.960 [2024-12-04 14:14:52.393936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.960 [2024-12-04 14:14:52.393942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:13:50.960 [2024-12-04 14:14:52.393949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:13:50.960 [2024-12-04 14:14:52.393955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.960 [2024-12-04 14:14:52.393973] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:13:50.960 [2024-12-04 14:14:52.394071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:13:50.960 [2024-12-04 14:14:52.394083] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:13:50.960 [2024-12-04 14:14:52.394103] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:13:50.960 [2024-12-04 14:14:52.394112] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394119] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394127] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:13:50.960 [2024-12-04 14:14:52.394133] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:13:50.960 [2024-12-04 14:14:52.394143] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:13:50.960 [2024-12-04 14:14:52.394148] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:13:50.960 [2024-12-04 14:14:52.394156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.960 [2024-12-04 14:14:52.394161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:13:50.960 [2024-12-04 14:14:52.394169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:13:50.960 [2024-12-04 14:14:52.394174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.960 [2024-12-04 14:14:52.394228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.960 [2024-12-04 14:14:52.394234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:13:50.960 [2024-12-04 14:14:52.394241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.037 ms 00:13:50.960 [2024-12-04 14:14:52.394246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.960 [2024-12-04 14:14:52.394316] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:13:50.960 [2024-12-04 14:14:52.394323] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:13:50.960 [2024-12-04 14:14:52.394330] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:13:50.960 [2024-12-04 14:14:52.394347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394359] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:13:50.960 [2024-12-04 14:14:52.394365] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:13:50.960 [2024-12-04 14:14:52.394376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:13:50.960 [2024-12-04 14:14:52.394381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:13:50.960 [2024-12-04 14:14:52.394388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:13:50.960 [2024-12-04 14:14:52.394393] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:13:50.960 [2024-12-04 14:14:52.394399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:13:50.960 [2024-12-04 14:14:52.394404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:13:50.960 [2024-12-04 14:14:52.394417] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:13:50.960 [2024-12-04 14:14:52.394423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394428] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:13:50.960 [2024-12-04 14:14:52.394434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:13:50.960 [2024-12-04 14:14:52.394439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394445] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:13:50.960 [2024-12-04 14:14:52.394451] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:13:50.960 [2024-12-04 14:14:52.394468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394479] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:13:50.960 [2024-12-04 14:14:52.394484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394490] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:13:50.960 [2024-12-04 14:14:52.394502] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:13:50.960 [2024-12-04 14:14:52.394529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:13:50.960 [2024-12-04 14:14:52.394541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:13:50.960 [2024-12-04 14:14:52.394548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:13:50.960 [2024-12-04 14:14:52.394552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:13:50.960 [2024-12-04 14:14:52.394558] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:13:50.960 [2024-12-04 14:14:52.394564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:13:50.960 [2024-12-04 14:14:52.394570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:13:50.960 [2024-12-04 14:14:52.394582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:13:50.960 [2024-12-04 14:14:52.394587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:13:50.960 [2024-12-04 14:14:52.394593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:13:50.960 [2024-12-04 14:14:52.394598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:13:50.960 [2024-12-04 14:14:52.394610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:13:50.960 [2024-12-04 14:14:52.394615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:13:50.960 [2024-12-04 14:14:52.394622] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:13:50.960 [2024-12-04 14:14:52.394629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:13:50.960 [2024-12-04 14:14:52.394639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:13:50.960 [2024-12-04 14:14:52.394645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:13:50.960 [2024-12-04 14:14:52.394651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:13:50.960 [2024-12-04 14:14:52.394656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:13:50.960 [2024-12-04 14:14:52.394663] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:13:50.960 [2024-12-04 14:14:52.394669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:13:50.960 
[2024-12-04 14:14:52.394675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:13:50.960 [2024-12-04 14:14:52.394680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:13:50.960 [2024-12-04 14:14:52.394687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:13:50.960 [2024-12-04 14:14:52.394692] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:13:50.960 [2024-12-04 14:14:52.394699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:13:50.960 [2024-12-04 14:14:52.394705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:13:50.960 [2024-12-04 14:14:52.394713] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:13:50.960 [2024-12-04 14:14:52.394718] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:13:50.960 [2024-12-04 14:14:52.394731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:13:50.960 [2024-12-04 14:14:52.394738] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:13:50.960 [2024-12-04 14:14:52.394745] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:13:50.960 [2024-12-04 14:14:52.394750] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:13:50.961 [2024-12-04 14:14:52.394757] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:13:50.961 [2024-12-04 14:14:52.394763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.961 [2024-12-04 14:14:52.394769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:13:50.961 [2024-12-04 14:14:52.394775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:13:50.961 [2024-12-04 14:14:52.394781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.961 [2024-12-04 14:14:52.406671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.961 [2024-12-04 14:14:52.406707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:13:50.961 [2024-12-04 14:14:52.406715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.832 ms 00:13:50.961 [2024-12-04 14:14:52.406722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:50.961 [2024-12-04 14:14:52.406798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:50.961 [2024-12-04 14:14:52.406807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:13:50.961 [2024-12-04 14:14:52.406813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:13:50.961 [2024-12-04 14:14:52.406820] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.431796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.431904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:13:51.219 [2024-12-04 14:14:52.431917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.936 ms 00:13:51.219 [2024-12-04 14:14:52.431925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.431955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.431964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:13:51.219 [2024-12-04 14:14:52.431971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:13:51.219 [2024-12-04 14:14:52.431978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.432303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.432322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:13:51.219 [2024-12-04 14:14:52.432329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:13:51.219 [2024-12-04 14:14:52.432336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.432433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.432443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:13:51.219 [2024-12-04 14:14:52.432449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:13:51.219 [2024-12-04 14:14:52.432456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.460004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.460077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:13:51.219 [2024-12-04 14:14:52.460120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.528 ms 00:13:51.219 [2024-12-04 14:14:52.460135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.472206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:13:51.219 [2024-12-04 14:14:52.484170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.484198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:13:51.219 [2024-12-04 14:14:52.484208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.886 ms 00:13:51.219 [2024-12-04 14:14:52.484214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.533307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:51.219 [2024-12-04 14:14:52.533426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:13:51.219 [2024-12-04 14:14:52.533443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.060 ms 00:13:51.219 [2024-12-04 14:14:52.533449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:51.219 [2024-12-04 14:14:52.533486] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
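Everything this FTL startup trace operates on was assembled by three RPCs scattered through the log above; collected here for readability as a sketch, using the exact names and sizes from this run (the rpc shorthand variable is introduced only for this example):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the cache NVMe controller; its namespace appears as nvc0n1
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    # Carve one 5171 MiB split off nvc0n1 to act as the NV cache (nvc0n1p0)
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # Build ftl0 on the thin-provisioned lvol with the split as write buffer;
    # -t 240 widens the RPC client timeout because a first startup must scrub
    # the NV cache data region, the "Scrubbing 4GiB" step logged just below
    $rpc -t 240 bdev_ftl_create -b ftl0 \
        -d 11f6955a-cc4e-4d75-b970-b85770258c3b \
        -c nvc0n1p0 \
        --l2p_dram_limit 60

The layout dump above cross-checks these numbers: the base device reports the 103424 MiB computed earlier, the NV cache the 5171 MiB split, and 20971520 L2P entries at 4 bytes per address give exactly the 80.00 MiB l2p region, of which the 60 MiB DRAM limit keeps "59 (of 60) MiB" resident per the ftl_l2p_cache line above.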
00:13:51.219 [2024-12-04 14:14:52.533495] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:13:54.499 [2024-12-04 14:14:55.561443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.561497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:13:54.499 [2024-12-04 14:14:55.561514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3027.944 ms 00:13:54.499 [2024-12-04 14:14:55.561522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.561710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.561721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:13:54.499 [2024-12-04 14:14:55.561732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:13:54.499 [2024-12-04 14:14:55.561740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.585103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.585138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:13:54.499 [2024-12-04 14:14:55.585152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.314 ms 00:13:54.499 [2024-12-04 14:14:55.585160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.607130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.607167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:13:54.499 [2024-12-04 14:14:55.607182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.930 ms 00:13:54.499 [2024-12-04 14:14:55.607190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.607502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.607518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:13:54.499 [2024-12-04 14:14:55.607528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:13:54.499 [2024-12-04 14:14:55.607535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.676680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.676819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:13:54.499 [2024-12-04 14:14:55.676842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.108 ms 00:13:54.499 [2024-12-04 14:14:55.676851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.700644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.700761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:13:54.499 [2024-12-04 14:14:55.700830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.766 ms 00:13:54.499 [2024-12-04 14:14:55.700853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.704438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.704550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:13:54.499 [2024-12-04 14:14:55.704613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:13:54.499 [2024-12-04 14:14:55.704636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.727927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.728044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:13:54.499 [2024-12-04 14:14:55.728119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.239 ms 00:13:54.499 [2024-12-04 14:14:55.728143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.499 [2024-12-04 14:14:55.728195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.499 [2024-12-04 14:14:55.728506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:13:54.499 [2024-12-04 14:14:55.728602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:13:54.499 [2024-12-04 14:14:55.728630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.500 [2024-12-04 14:14:55.728744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:54.500 [2024-12-04 14:14:55.728779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:13:54.500 [2024-12-04 14:14:55.728839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:13:54.500 [2024-12-04 14:14:55.728861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:54.500 [2024-12-04 14:14:55.729816] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3345.359 ms, result 0 00:13:54.500 { 00:13:54.500 "name": "ftl0", 00:13:54.500 "uuid": "1f4ab0eb-8883-41c9-abcb-0989134a40cf" 00:13:54.500 } 00:13:54.500 14:14:55 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:13:54.500 14:14:55 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:13:54.500 14:14:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:54.500 14:14:55 -- common/autotest_common.sh@899 -- # local i 00:13:54.500 14:14:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:54.500 14:14:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:54.500 14:14:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:54.500 14:14:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:13:54.757 [ 00:13:54.757 { 00:13:54.757 "name": "ftl0", 00:13:54.757 "aliases": [ 00:13:54.757 "1f4ab0eb-8883-41c9-abcb-0989134a40cf" 00:13:54.757 ], 00:13:54.757 "product_name": "FTL disk", 00:13:54.757 "block_size": 4096, 00:13:54.757 "num_blocks": 20971520, 00:13:54.757 "uuid": "1f4ab0eb-8883-41c9-abcb-0989134a40cf", 00:13:54.757 "assigned_rate_limits": { 00:13:54.757 "rw_ios_per_sec": 0, 00:13:54.757 "rw_mbytes_per_sec": 0, 00:13:54.757 "r_mbytes_per_sec": 0, 00:13:54.757 "w_mbytes_per_sec": 0 00:13:54.757 }, 00:13:54.757 "claimed": false, 00:13:54.757 "zoned": false, 00:13:54.757 "supported_io_types": { 00:13:54.757 "read": true, 00:13:54.757 "write": true, 00:13:54.757 "unmap": true, 00:13:54.757 "write_zeroes": true, 00:13:54.757 "flush": true, 00:13:54.757 "reset": false, 00:13:54.757 "compare": false, 00:13:54.757 "compare_and_write": false, 00:13:54.757 "abort": false, 00:13:54.757 "nvme_admin": false, 00:13:54.758 "nvme_io": false 00:13:54.758 }, 
00:13:54.758 "driver_specific": { 00:13:54.758 "ftl": { 00:13:54.758 "base_bdev": "11f6955a-cc4e-4d75-b970-b85770258c3b", 00:13:54.758 "cache": "nvc0n1p0" 00:13:54.758 } 00:13:54.758 } 00:13:54.758 } 00:13:54.758 ] 00:13:54.758 14:14:56 -- common/autotest_common.sh@905 -- # return 0 00:13:54.758 14:14:56 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:13:54.758 14:14:56 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:13:55.015 14:14:56 -- ftl/fio.sh@70 -- # echo ']}' 00:13:55.015 14:14:56 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:13:55.275 [2024-12-04 14:14:56.485929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.486073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:13:55.275 [2024-12-04 14:14:56.486107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:13:55.275 [2024-12-04 14:14:56.486117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.275 [2024-12-04 14:14:56.486149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:13:55.275 [2024-12-04 14:14:56.488662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.488691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:13:55.275 [2024-12-04 14:14:56.488705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.494 ms 00:13:55.275 [2024-12-04 14:14:56.488713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.275 [2024-12-04 14:14:56.489132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.489147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:13:55.275 [2024-12-04 14:14:56.489157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:13:55.275 [2024-12-04 14:14:56.489164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.275 [2024-12-04 14:14:56.492404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.492424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:13:55.275 [2024-12-04 14:14:56.492435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:13:55.275 [2024-12-04 14:14:56.492443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.275 [2024-12-04 14:14:56.498678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.498703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:13:55.275 [2024-12-04 14:14:56.498714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.205 ms 00:13:55.275 [2024-12-04 14:14:56.498721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.275 [2024-12-04 14:14:56.522496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.275 [2024-12-04 14:14:56.522530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:13:55.275 [2024-12-04 14:14:56.522543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.685 ms 00:13:55.276 [2024-12-04 14:14:56.522550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.537470] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.537503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:13:55.276 [2024-12-04 14:14:56.537528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.876 ms 00:13:55.276 [2024-12-04 14:14:56.537536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.537717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.537728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:13:55.276 [2024-12-04 14:14:56.537740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:13:55.276 [2024-12-04 14:14:56.537747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.557526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.557551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:13:55.276 [2024-12-04 14:14:56.557560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.748 ms 00:13:55.276 [2024-12-04 14:14:56.557565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.575317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.575417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:13:55.276 [2024-12-04 14:14:56.575432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.716 ms 00:13:55.276 [2024-12-04 14:14:56.575437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.592493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.592519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:13:55.276 [2024-12-04 14:14:56.592529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.021 ms 00:13:55.276 [2024-12-04 14:14:56.592534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.610027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.276 [2024-12-04 14:14:56.610057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:13:55.276 [2024-12-04 14:14:56.610067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.415 ms 00:13:55.276 [2024-12-04 14:14:56.610072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.276 [2024-12-04 14:14:56.610119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:13:55.276 [2024-12-04 14:14:56.610130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 
14:14:56.610345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:13:55.276 [2024-12-04 14:14:56.610492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:13:55.277 [2024-12-04 14:14:56.610510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:13:55.277 [2024-12-04 14:14:56.610814] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:13:55.277 [2024-12-04 14:14:56.610821] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f4ab0eb-8883-41c9-abcb-0989134a40cf 00:13:55.277 [2024-12-04 14:14:56.610827] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:13:55.277 [2024-12-04 14:14:56.610834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:13:55.277 [2024-12-04 14:14:56.610839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:13:55.277 [2024-12-04 14:14:56.610846] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:13:55.277 [2024-12-04 14:14:56.610851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:13:55.277 [2024-12-04 14:14:56.610858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:13:55.277 [2024-12-04 14:14:56.610864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:13:55.277 [2024-12-04 14:14:56.610869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:13:55.277 [2024-12-04 14:14:56.610874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:13:55.277 [2024-12-04 14:14:56.610882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.277 [2024-12-04 14:14:56.610889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:13:55.277 [2024-12-04 14:14:56.610897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:13:55.277 [2024-12-04 14:14:56.610902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.277 [2024-12-04 14:14:56.620152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.277 [2024-12-04 14:14:56.620176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:13:55.277 [2024-12-04 14:14:56.620185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.216 ms 00:13:55.277 [2024-12-04 14:14:56.620191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.277 [2024-12-04 14:14:56.620344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:13:55.277 [2024-12-04 14:14:56.620350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:13:55.277 [2024-12-04 14:14:56.620358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:13:55.277 [2024-12-04 14:14:56.620363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.277 [2024-12-04 14:14:56.655114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.277 [2024-12-04 14:14:56.655143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:13:55.277 [2024-12-04 14:14:56.655153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.277 [2024-12-04 14:14:56.655159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.277 [2024-12-04 14:14:56.655205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.277 [2024-12-04 14:14:56.655211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:13:55.277 [2024-12-04 14:14:56.655218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.278 [2024-12-04 14:14:56.655223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.278 [2024-12-04 14:14:56.655286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.278 [2024-12-04 14:14:56.655294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:13:55.278 [2024-12-04 14:14:56.655301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.278 [2024-12-04 14:14:56.655307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.278 [2024-12-04 14:14:56.655326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.278 [2024-12-04 14:14:56.655333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:13:55.278 [2024-12-04 14:14:56.655340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.278 [2024-12-04 14:14:56.655346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.278 [2024-12-04 14:14:56.720027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.278 [2024-12-04 14:14:56.720066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:13:55.278 [2024-12-04 14:14:56.720077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.278 [2024-12-04 14:14:56.720083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:13:55.536 [2024-12-04 14:14:56.742176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:13:55.536 [2024-12-04 14:14:56.742256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:13:55.536 [2024-12-04 14:14:56.742322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:13:55.536 [2024-12-04 14:14:56.742429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:13:55.536 [2024-12-04 14:14:56.742486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:13:55.536 [2024-12-04 14:14:56.742547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:13:55.536 [2024-12-04 14:14:56.742602] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:13:55.536 [2024-12-04 14:14:56.742611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:13:55.536 [2024-12-04 14:14:56.742616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:13:55.536 [2024-12-04 14:14:56.742744] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 256.795 ms, result 0 00:13:55.536 true 00:13:55.536 14:14:56 -- ftl/fio.sh@75 -- # killprocess 70429 00:13:55.536 14:14:56 -- common/autotest_common.sh@936 -- # '[' -z 70429 ']' 00:13:55.536 14:14:56 -- common/autotest_common.sh@940 -- # kill -0 70429 00:13:55.536 14:14:56 -- common/autotest_common.sh@941 -- # uname 00:13:55.536 14:14:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.536 14:14:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70429 00:13:55.536 killing process with pid 70429 00:13:55.536 14:14:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:55.536 14:14:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:55.536 14:14:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70429' 00:13:55.536 14:14:56 -- common/autotest_common.sh@955 -- # kill 70429 00:13:55.536 14:14:56 -- common/autotest_common.sh@960 -- # wait 70429 00:13:59.722 14:15:00 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:13:59.722 14:15:00 -- ftl/fio.sh@78 -- # for test in ${tests} 00:13:59.722 14:15:00 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:13:59.722 14:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:59.722 14:15:00 -- common/autotest_common.sh@10 -- # set +x 00:13:59.722 14:15:00 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:59.722 14:15:00 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:59.722 14:15:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:59.722 14:15:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:59.722 14:15:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:59.722 14:15:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.722 14:15:00 -- common/autotest_common.sh@1330 -- # shift 00:13:59.722 14:15:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:59.722 14:15:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:59.722 14:15:00 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:59.722 14:15:00 -- common/autotest_common.sh@1336 -- # break 00:13:59.722 14:15:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:59.722 14:15:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:13:59.722 test: (g=0): rw=randwrite, bs=(R) 
00:13:59.722 14:15:00 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:13:59.722 14:15:00 -- ftl/fio.sh@78 -- # for test in ${tests}
00:13:59.722 14:15:00 -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:13:59.722 14:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:59.722 14:15:00 -- common/autotest_common.sh@10 -- # set +x
00:13:59.722 14:15:00 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:13:59.722 14:15:00 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:13:59.722 14:15:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:13:59.722 14:15:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:13:59.722 14:15:00 -- common/autotest_common.sh@1328 -- # local sanitizers
00:13:59.722 14:15:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:59.722 14:15:00 -- common/autotest_common.sh@1330 -- # shift
00:13:59.722 14:15:00 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:13:59.722 14:15:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # grep libasan
00:13:59.722 14:15:00 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:13:59.722 14:15:00 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:13:59.722 14:15:00 -- common/autotest_common.sh@1336 -- # break
00:13:59.722 14:15:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:13:59.722 14:15:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:13:59.722 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:13:59.722 fio-3.35
00:13:59.722 Starting 1 thread
00:14:05.002 
00:14:05.002 test: (groupid=0, jobs=1): err= 0: pid=70633: Wed Dec 4 14:15:05 2024
00:14:05.002 read: IOPS=1107, BW=73.6MiB/s (77.1MB/s)(255MiB/3460msec)
00:14:05.002 slat (nsec): min=2924, max=28841, avg=4007.99, stdev=1711.81
00:14:05.002 clat (usec): min=243, max=1439, avg=408.32, stdev=174.94
00:14:05.002 lat (usec): min=247, max=1450, avg=412.32, stdev=175.54
00:14:05.002 clat percentiles (usec):
00:14:05.002 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 306], 20.00th=[ 310],
00:14:05.002 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326],
00:14:05.002 | 70.00th=[ 408], 80.00th=[ 502], 90.00th=[ 750], 95.00th=[ 832],
00:14:05.002 | 99.00th=[ 971], 99.50th=[ 1037], 99.90th=[ 1254], 99.95th=[ 1369],
00:14:05.002 | 99.99th=[ 1434]
00:14:05.002 write: IOPS=1115, BW=74.1MiB/s (77.7MB/s)(256MiB/3457msec); 0 zone resets
00:14:05.002 slat (usec): min=13, max=281, avg=17.30, stdev= 5.26
00:14:05.002 clat (usec): min=281, max=2111, avg=457.63, stdev=231.11
00:14:05.002 lat (usec): min=295, max=2129, avg=474.93, stdev=232.38
00:14:05.002 clat percentiles (usec):
00:14:05.002 | 1.00th=[ 297], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 334],
00:14:05.002 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 355],
00:14:05.002 | 70.00th=[ 461], 80.00th=[ 553], 90.00th=[ 840], 95.00th=[ 914],
00:14:05.002 | 99.00th=[ 1516], 99.50th=[ 1647], 99.90th=[ 1991], 99.95th=[ 2024],
00:14:05.002 | 99.99th=[ 2114]
00:14:05.002 bw ( KiB/s): min=43928, max=98600, per=95.66%, avg=72556.00, stdev=25458.06, samples=6
00:14:05.002 iops : min= 646, max= 1450, avg=1067.00, stdev=374.38, samples=6
00:14:05.002 lat (usec) : 250=0.03%, 500=75.73%, 750=13.42%, 1000=9.18%
00:14:05.002 lat (msec) : 2=1.60%, 4=0.04%
00:14:05.002 cpu : usr=99.48%, sys=0.00%, ctx=4, majf=0, minf=1318
00:14:05.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:05.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:05.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:05.002 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:05.002 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:05.002 
00:14:05.002 Run status group 0 (all jobs):
00:14:05.002 READ: bw=73.6MiB/s (77.1MB/s), 73.6MiB/s-73.6MiB/s (77.1MB/s-77.1MB/s), io=255MiB (267MB), run=3460-3460msec
00:14:05.002 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=256MiB (269MB), run=3457-3457msec
00:14:05.262 -----------------------------------------------------
00:14:05.262 Suppressions used:
00:14:05.262 count bytes template
00:14:05.262 1 5 /usr/src/fio/parse.c
00:14:05.262 1 8 libtcmalloc_minimal.so
00:14:05.262 1 904 libcrypto.so
00:14:05.262 -----------------------------------------------------
00:14:05.262 
00:14:05.262 14:15:06 -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:14:05.262 14:15:06 -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:05.262 14:15:06 -- common/autotest_common.sh@10 -- # set +x
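Each fio_bdev invocation above follows the same pattern: ldd the SPDK fio plugin to find the libasan runtime it was linked against, then preload the sanitizer ahead of the plugin so fio can run jobs with ioengine=spdk_bdev. A minimal sketch of that wrapper, using the paths from this run; the real fio_plugin helper also falls back to libclang_rt.asan:

    # Sketch of the fio plugin launch traced above (illustrative, not verbatim).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # The ASAN runtime must come first in LD_PRELOAD or ASAN refuses to start.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$1"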
00:14:05.522 14:15:06 -- ftl/fio.sh@78 -- # for test in ${tests}
00:14:05.522 14:15:06 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:14:05.522 14:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:05.522 14:15:06 -- common/autotest_common.sh@10 -- # set +x
00:14:05.522 14:15:06 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:05.522 14:15:06 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:05.522 14:15:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:14:05.522 14:15:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:05.522 14:15:06 -- common/autotest_common.sh@1328 -- # local sanitizers
00:14:05.522 14:15:06 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:05.522 14:15:06 -- common/autotest_common.sh@1330 -- # shift
00:14:05.522 14:15:06 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:14:05.522 14:15:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:14:05.522 14:15:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:05.522 14:15:06 -- common/autotest_common.sh@1334 -- # grep libasan
00:14:05.522 14:15:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:14:05.522 14:15:06 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:05.522 14:15:06 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:05.522 14:15:06 -- common/autotest_common.sh@1336 -- # break
00:14:05.522 14:15:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:05.522 14:15:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:14:05.522 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:14:05.522 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:14:05.522 fio-3.35
00:14:05.522 Starting 2 threads
00:14:32.092 
00:14:32.092 first_half: (groupid=0, jobs=1): err= 0: pid=70729: Wed Dec 4 14:15:28 2024
00:14:32.092 read: IOPS=3107, BW=12.1MiB/s (12.7MB/s)(255MiB/20994msec)
00:14:32.092 slat (nsec): min=2959, max=18934, avg=3680.69, stdev=619.45
00:14:32.092 clat (usec): min=559, max=421838, avg=31199.81, stdev=18734.86
00:14:32.092 lat (usec): min=562, max=421842, avg=31203.49, stdev=18734.92
00:14:32.092 clat percentiles (msec):
00:14:32.092 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 28],
00:14:32.092 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29],
00:14:32.092 | 70.00th=[ 29], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 42],
00:14:32.092 | 99.00th=[ 121], 99.50th=[ 138], 99.90th=[ 279], 99.95th=[ 368],
00:14:32.092 | 99.99th=[ 414]
00:14:32.092 write: IOPS=3651, BW=14.3MiB/s (15.0MB/s)(256MiB/17946msec); 0 zone resets
00:14:32.092 slat (usec): min=3, max=2822, avg= 5.43, stdev=19.03
00:14:32.092 clat (usec): min=312, max=75635, avg=9897.33, stdev=15713.72
00:14:32.092 lat (usec): min=319, max=75640, avg=9902.77, stdev=15713.78
00:14:32.092 clat percentiles (usec):
00:14:32.092 | 1.00th=[ 603], 5.00th=[ 676], 10.00th=[ 758], 20.00th=[ 1123],
00:14:32.092 | 30.00th=[ 2671], 40.00th=[ 3982], 50.00th=[ 4621], 60.00th=[ 5080],
00:14:32.092 | 70.00th=[ 5866], 80.00th=[10028], 90.00th=[28443], 95.00th=[56361],
00:14:32.092 | 99.00th=[62653], 99.50th=[65274], 99.90th=[68682], 99.95th=[69731],
00:14:32.092 | 99.99th=[74974]
00:14:32.092 bw ( KiB/s): min= 1008, max=41528, per=78.03%, avg=22795.13, stdev=11737.17, samples=23
00:14:32.092 iops : min= 252, max=10382, avg=5698.78, stdev=2934.29, samples=23
00:14:32.092 lat (usec) : 500=0.03%, 750=4.75%, 1000=4.04%
00:14:32.092 lat (msec) : 2=3.69%, 4=7.93%, 10=21.79%, 20=4.65%, 50=47.18%
00:14:32.092 lat (msec) : 100=5.14%, 250=0.74%, 500=0.06%
00:14:32.092 cpu : usr=99.44%, sys=0.18%, ctx=40, majf=0, minf=5565
00:14:32.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:14:32.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:32.092 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:32.092 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:32.092 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:32.092 second_half: (groupid=0, jobs=1): err= 0: pid=70730: Wed Dec 4 14:15:28 2024
00:14:32.092 read: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(254MiB/20837msec)
00:14:32.092 slat (nsec): min=2950, max=19910, avg=3613.03, stdev=574.61
00:14:32.092 clat (usec): min=548, max=357487, avg=32067.62, stdev=15736.31
00:14:32.092 lat (usec): min=551, max=357492, avg=32071.24, stdev=15736.36
00:14:32.092 clat percentiles (msec):
00:14:32.092 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29],
00:14:32.092 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29],
00:14:32.092 | 70.00th=[ 30], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 45],
00:14:32.092 | 99.00th=[ 118], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 213],
00:14:32.092 | 99.99th=[ 321]
00:14:32.092 write: IOPS=4851, BW=19.0MiB/s (19.9MB/s)(256MiB/13508msec); 0 zone resets
00:14:32.092 slat (usec): min=3, max=2855, avg= 5.22, stdev=11.47
00:14:32.092 clat (usec): min=316, max=75540, avg=8808.12, stdev=15377.64
00:14:32.093 lat (usec): min=324, max=75545, avg=8813.34, stdev=15377.67
00:14:32.093 clat percentiles (usec):
00:14:32.093 | 1.00th=[ 611], 5.00th=[ 693], 10.00th=[ 766], 20.00th=[ 963],
00:14:32.093 | 30.00th=[ 1237], 40.00th=[ 2606], 50.00th=[ 3556], 60.00th=[ 4490],
00:14:32.093 | 70.00th=[ 5342], 80.00th=[ 9896], 90.00th=[16712], 95.00th=[56361],
00:14:32.093 | 99.00th=[62653], 99.50th=[65274], 99.90th=[69731], 99.95th=[73925],
00:14:32.093 | 99.99th=[74974]
00:14:32.093 bw ( KiB/s): min= 824, max=51128, per=100.00%, avg=32768.00, stdev=13544.03, samples=16
00:14:32.093 iops : min= 206, max=12782, avg=8192.00, stdev=3386.01, samples=16
00:14:32.093 lat (usec) : 500=0.05%, 750=4.55%, 1000=6.55%
00:14:32.093 lat (msec) : 2=6.39%, 4=10.71%, 10=12.93%, 20=5.57%, 50=47.03%
00:14:32.093 lat (msec) : 100=5.44%, 250=0.76%, 500=0.02%
00:14:32.093 cpu : usr=99.51%, sys=0.13%, ctx=35, majf=0, minf=5552
00:14:32.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:14:32.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:32.093 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:32.093 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:32.093 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:32.093 
00:14:32.093 Run status group 0 (all jobs):
00:14:32.093 READ: bw=24.3MiB/s (25.4MB/s), 12.1MiB/s-12.2MiB/s (12.7MB/s-12.8MB/s), io=509MiB (534MB), run=20837-20994msec
00:14:32.093 WRITE: bw=28.5MiB/s (29.9MB/s), 14.3MiB/s-19.0MiB/s (15.0MB/s-19.9MB/s), io=512MiB (537MB), run=13508-17946msec
00:14:32.093 -----------------------------------------------------
00:14:32.093 Suppressions used:
00:14:32.093 count bytes template
00:14:32.093 2 10 /usr/src/fio/parse.c
00:14:32.093 2 192 /usr/src/fio/iolog.c
00:14:32.093 1 8 libtcmalloc_minimal.so
00:14:32.093 1 904 libcrypto.so
00:14:32.093 -----------------------------------------------------
00:14:32.093 
00:14:32.093 14:15:31 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:14:32.093 14:15:31 -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:32.093 14:15:31 -- common/autotest_common.sh@10 -- # set +x
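The "Suppressions used" tables printed after each fio run come from LeakSanitizer: the count/bytes columns are leaks that matched a suppression template instead of being reported as failures. A suppressions file that would yield entries like the ones above looks roughly like this; the exact file used by this environment is not shown in the log, so the contents are an assumption:

    # Illustrative LSAN suppression file, activated via
    #   LSAN_OPTIONS=suppressions=/path/to/lsan.supp
    leak:/usr/src/fio/parse.c
    leak:/usr/src/fio/iolog.c
    leak:libtcmalloc_minimal.so
    leak:libcrypto.so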
00:14:32.093 14:15:31 -- ftl/fio.sh@78 -- # for test in ${tests}
00:14:32.093 14:15:31 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:14:32.093 14:15:31 -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:32.093 14:15:31 -- common/autotest_common.sh@10 -- # set +x
00:14:32.093 14:15:31 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:14:32.093 14:15:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:14:32.093 14:15:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:14:32.093 14:15:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:32.093 14:15:31 -- common/autotest_common.sh@1328 -- # local sanitizers
00:14:32.093 14:15:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:32.093 14:15:31 -- common/autotest_common.sh@1330 -- # shift
00:14:32.093 14:15:31 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:14:32.093 14:15:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:14:32.093 14:15:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:14:32.093 14:15:31 -- common/autotest_common.sh@1334 -- # grep libasan
00:14:32.093 14:15:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:14:32.093 14:15:31 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8
00:14:32.093 14:15:31 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:14:32.093 14:15:31 -- common/autotest_common.sh@1336 -- # break
00:14:32.093 14:15:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:14:32.093 14:15:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:14:32.093 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:14:32.093 fio-3.35
00:14:32.093 Starting 1 thread
00:14:44.350 
00:14:44.350 test: (groupid=0, jobs=1): err= 0: pid=71005: Wed Dec 4 14:15:44 2024
00:14:44.350 read: IOPS=8122, BW=31.7MiB/s (33.3MB/s)(255MiB/8027msec)
00:14:44.350 slat (nsec): min=2999, max=25212, avg=3421.44, stdev=551.44
00:14:44.350 clat (usec): min=470, max=37221, avg=15749.74, stdev=2318.33
00:14:44.350 lat (usec): min=474, max=37224, avg=15753.16, stdev=2318.36
00:14:44.350 clat percentiles (usec):
00:14:44.350 | 1.00th=[13829], 5.00th=[13960], 10.00th=[14091], 20.00th=[14222],
00:14:44.350 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877],
00:14:44.350 | 70.00th=[15533], 80.00th=[17433], 90.00th=[19530], 95.00th=[20841],
00:14:44.350 | 99.00th=[22676], 99.50th=[23462], 99.90th=[25822], 99.95th=[33162],
00:14:44.350 | 99.99th=[36439]
00:14:44.350 write: IOPS=14.9k, BW=58.2MiB/s (61.0MB/s)(256MiB/4400msec); 0 zone resets
00:14:44.350 slat (usec): min=3, max=116, avg= 5.27, stdev= 1.91
00:14:44.350 clat (usec): min=453, max=42963, avg=8555.56, stdev=9405.24
00:14:44.350 lat (usec): min=458, max=42969, avg=8560.83, stdev=9405.29
00:14:44.350 clat percentiles (usec):
00:14:44.350 | 1.00th=[ 586], 5.00th=[ 660], 10.00th=[ 742], 20.00th=[ 889],
00:14:44.350 | 30.00th=[ 1012], 40.00th=[ 1385], 50.00th=[ 5080], 60.00th=[ 6587],
00:14:44.350 | 70.00th=[11076], 80.00th=[15270], 90.00th=[26608], 95.00th=[27919],
00:14:44.350 | 99.00th=[30540], 99.50th=[32900], 99.90th=[39584], 99.95th=[40109],
00:14:44.350 | 99.99th=[40633]
00:14:44.350 bw ( KiB/s): min=34091, max=91280, per=97.76%, avg=58246.56, stdev=15997.47, samples=9
00:14:44.350 iops : min= 8522, max=22820, avg=14561.56, stdev=3999.51, samples=9
00:14:44.350 lat (usec) : 500=0.01%, 750=5.16%, 1000=9.45%
00:14:44.350 lat (msec) : 2=6.01%, 4=0.61%, 10=12.57%, 20=54.45%, 50=11.75%
00:14:44.350 cpu : usr=99.44%, sys=0.14%, ctx=22, majf=0, minf=5567
00:14:44.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:14:44.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:44.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:44.350 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:44.350 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:44.350 
00:14:44.350 Run status group 0 (all jobs):
00:14:44.350 READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=255MiB (267MB), run=8027-8027msec
00:14:44.350 WRITE: bw=58.2MiB/s (61.0MB/s), 58.2MiB/s-58.2MiB/s (61.0MB/s-61.0MB/s), io=256MiB (268MB), run=4400-4400msec
00:14:44.922 -----------------------------------------------------
00:14:44.922 Suppressions used:
00:14:44.922 count bytes template
00:14:44.922 1 5 /usr/src/fio/parse.c
00:14:44.922 2 192 /usr/src/fio/iolog.c
00:14:44.922 1 8 libtcmalloc_minimal.so
00:14:44.922 1 904 libcrypto.so
00:14:44.922 -----------------------------------------------------
00:14:44.922 
00:14:44.922 14:15:46 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:14:44.922 14:15:46 -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:44.922 14:15:46 -- common/autotest_common.sh@10 -- # set +x
00:14:44.922 14:15:46 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:14:44.922 Remove shared memory files
00:14:44.922 14:15:46 -- ftl/fio.sh@85 -- # remove_shm
00:14:44.922 14:15:46 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:14:44.922 14:15:46 -- ftl/common.sh@205 -- # rm -f rm -f
00:14:44.922 14:15:46 -- ftl/common.sh@206 -- # rm -f rm -f
00:14:44.922 14:15:46 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56171 /dev/shm/spdk_tgt_trace.pid69329
00:14:44.922 14:15:46 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:14:44.922 14:15:46 -- ftl/common.sh@209 -- # rm -f rm -f
00:14:44.922 ************************************
00:14:44.922 END TEST ftl_fio_basic
00:14:44.922 ************************************
00:14:44.922 
00:14:44.922 real 0m57.439s
00:14:44.922 user 2m2.626s
00:14:44.922 sys 0m2.538s
00:14:44.922 14:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:44.922 14:15:46 -- common/autotest_common.sh@10 -- # set +x
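The banner pairs and the real/user/sys summary above (and the START TEST banner just below) are produced by the harness's run_test wrapper, which times a test function and brackets it with markers. Schematically, as a sketch of the observed behavior rather than the exact autotest_common.sh helper (the real one interleaves the timing output slightly differently):

    # Illustrative sketch of the run_test banner/timing pattern.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"      # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }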
00:14:45.184 14:15:46 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0
00:14:45.184 14:15:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:45.184 14:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:45.184 14:15:46 -- common/autotest_common.sh@10 -- # set +x
00:14:45.184 ************************************
00:14:45.184 START TEST ftl_bdevperf
00:14:45.184 ************************************
00:14:45.184 14:15:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0
00:14:45.184 * Looking for test storage...
00:14:45.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:14:45.184 14:15:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:45.184 14:15:46 -- common/autotest_common.sh@1690 -- # lcov --version
00:14:45.184 14:15:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:45.184 14:15:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:45.184 14:15:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:45.184 14:15:46 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:45.184 14:15:46 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:45.184 14:15:46 -- scripts/common.sh@335 -- # IFS=.-:
00:14:45.184 14:15:46 -- scripts/common.sh@335 -- # read -ra ver1
00:14:45.184 14:15:46 -- scripts/common.sh@336 -- # IFS=.-:
00:14:45.184 14:15:46 -- scripts/common.sh@336 -- # read -ra ver2
00:14:45.184 14:15:46 -- scripts/common.sh@337 -- # local 'op=<'
00:14:45.184 14:15:46 -- scripts/common.sh@339 -- # ver1_l=2
00:14:45.184 14:15:46 -- scripts/common.sh@340 -- # ver2_l=1
00:14:45.184 14:15:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:45.184 14:15:46 -- scripts/common.sh@343 -- # case "$op" in
00:14:45.184 14:15:46 -- scripts/common.sh@344 -- # : 1
00:14:45.184 14:15:46 -- scripts/common.sh@363 -- # (( v = 0 ))
00:14:45.184 14:15:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:45.184 14:15:46 -- scripts/common.sh@364 -- # decimal 1
00:14:45.184 14:15:46 -- scripts/common.sh@352 -- # local d=1
00:14:45.184 14:15:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:45.184 14:15:46 -- scripts/common.sh@354 -- # echo 1
00:14:45.184 14:15:46 -- scripts/common.sh@364 -- # ver1[v]=1
00:14:45.184 14:15:46 -- scripts/common.sh@365 -- # decimal 2
00:14:45.184 14:15:46 -- scripts/common.sh@352 -- # local d=2
00:14:45.184 14:15:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:45.184 14:15:46 -- scripts/common.sh@354 -- # echo 2
00:14:45.184 14:15:46 -- scripts/common.sh@365 -- # ver2[v]=2
00:14:45.184 14:15:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:45.184 14:15:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:45.184 14:15:46 -- scripts/common.sh@367 -- # return 0
00:14:45.184 14:15:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:45.184 14:15:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:45.184 --rc genhtml_branch_coverage=1
00:14:45.184 --rc genhtml_function_coverage=1
00:14:45.184 --rc genhtml_legend=1
00:14:45.184 --rc geninfo_all_blocks=1
00:14:45.184 --rc geninfo_unexecuted_blocks=1
00:14:45.184 
00:14:45.184 '
00:14:45.184 14:15:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:45.184 --rc genhtml_branch_coverage=1
00:14:45.184 --rc genhtml_function_coverage=1
00:14:45.184 --rc genhtml_legend=1
00:14:45.184 --rc geninfo_all_blocks=1
00:14:45.184 --rc geninfo_unexecuted_blocks=1
00:14:45.184 
00:14:45.184 '
00:14:45.184 14:15:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:14:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:45.184 --rc genhtml_branch_coverage=1
00:14:45.184 --rc genhtml_function_coverage=1
00:14:45.184 --rc genhtml_legend=1
00:14:45.184 --rc geninfo_all_blocks=1
00:14:45.184 --rc geninfo_unexecuted_blocks=1
00:14:45.184 
00:14:45.184 '
00:14:45.184 14:15:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:14:45.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:45.184 --rc genhtml_branch_coverage=1
00:14:45.184 --rc genhtml_function_coverage=1
00:14:45.184 --rc genhtml_legend=1
00:14:45.184 --rc geninfo_all_blocks=1
00:14:45.184 --rc geninfo_unexecuted_blocks=1
00:14:45.184 
00:14:45.184 '
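The scripts/common.sh trace above is a dotted-version comparison: each version string is split on '.', '-' and ':' into components, missing components are treated as 0, and the arrays are compared numerically left to right; here 1.15 against 2 is decided on the first component (1 < 2), so lt returns 0 and the newer lcov option set is used. The same logic condensed into one function, as a sketch rather than the verbatim cmp_versions:

    # Condensed sketch of the version comparison walked through above.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater, so not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        done
        return 1   # equal is not "less than"
    }
    # lt 1.15 2  -> 0: decided on the first component, as in the trace.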
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:14:45.184 14:15:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:14:45.184 14:15:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:14:45.184 14:15:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:14:45.184 14:15:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:14:45.184 14:15:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:45.184 14:15:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:45.184 14:15:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:14:45.184 14:15:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:14:45.184 14:15:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:45.184 14:15:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:45.184 14:15:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:14:45.184 14:15:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:14:45.184 14:15:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:45.184 14:15:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:45.184 14:15:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:14:45.184 14:15:46 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:14:45.184 14:15:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:45.184 14:15:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:45.184 14:15:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:14:45.184 14:15:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:14:45.184 14:15:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:45.184 14:15:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:45.184 14:15:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:45.184 14:15:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:45.184 14:15:46 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:14:45.184 14:15:46 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:14:45.184 14:15:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:45.184 14:15:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@13 -- # use_append=
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@15 -- # timeout=240
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:14:45.184 14:15:46 -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:45.184 14:15:46 -- common/autotest_common.sh@10 -- # set +x
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71239
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:14:45.184 14:15:46 -- ftl/bdevperf.sh@22 -- # waitforlisten 71239
00:14:45.184 14:15:46 -- common/autotest_common.sh@829 -- # '[' -z 71239 ']'
00:14:45.184 14:15:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:45.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:45.185 14:15:46 -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:45.185 14:15:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:45.185 14:15:46 -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:45.185 14:15:46 -- common/autotest_common.sh@10 -- # set +x
00:14:45.185 [2024-12-04 14:15:46.624740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:45.185 [2024-12-04 14:15:46.625433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71239 ]
00:14:45.446 [2024-12-04 14:15:46.773783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:45.706 [2024-12-04 14:15:46.951622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:46.279 14:15:47 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:46.279 14:15:47 -- common/autotest_common.sh@862 -- # return 0
00:14:46.279 14:15:47 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:14:46.279 14:15:47 -- ftl/common.sh@54 -- # local name=nvme0
00:14:46.279 14:15:47 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:14:46.279 14:15:47 -- ftl/common.sh@56 -- # local size=103424
00:14:46.279 14:15:47 -- ftl/common.sh@59 -- # local base_bdev
00:14:46.279 14:15:47 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:14:46.279 14:15:47 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:14:46.279 14:15:47 -- ftl/common.sh@62 -- # local base_size
00:14:46.279 14:15:47 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:14:46.279 14:15:47 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:14:46.279 14:15:47 -- common/autotest_common.sh@1368 -- # local bdev_info
00:14:46.279 14:15:47 -- common/autotest_common.sh@1369 -- # local bs
00:14:46.279 14:15:47 -- common/autotest_common.sh@1370 -- # local nb
00:14:46.279 14:15:47 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:14:46.540 14:15:47 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:14:46.540 {
00:14:46.540 "name": "nvme0n1",
00:14:46.540 "aliases": [
00:14:46.540 "20a22785-7739-45c4-b6f7-be102fd5b1fe"
00:14:46.540 ],
00:14:46.540 "product_name": "NVMe disk",
00:14:46.540 "block_size": 4096,
00:14:46.540 "num_blocks": 1310720,
00:14:46.540 "uuid": "20a22785-7739-45c4-b6f7-be102fd5b1fe",
00:14:46.540 "assigned_rate_limits": {
00:14:46.540 "rw_ios_per_sec": 0,
00:14:46.540 "rw_mbytes_per_sec": 0,
00:14:46.540 "r_mbytes_per_sec": 0,
00:14:46.540 "w_mbytes_per_sec": 0
00:14:46.540 },
00:14:46.540 "claimed": true,
00:14:46.540 "claim_type": "read_many_write_one",
00:14:46.540 "zoned": false,
00:14:46.540 "supported_io_types": {
00:14:46.540 "read": true,
00:14:46.540 "write": true,
00:14:46.540 "unmap": true,
00:14:46.540 "write_zeroes": true,
00:14:46.540 "flush": true,
00:14:46.540 "reset": true,
00:14:46.540 "compare": true,
00:14:46.540 "compare_and_write": false,
00:14:46.540 "abort": true,
00:14:46.540 "nvme_admin": true,
00:14:46.540 "nvme_io": true
00:14:46.540 },
00:14:46.540 "driver_specific": {
00:14:46.540 "nvme": [
00:14:46.540 {
00:14:46.540 "pci_address": "0000:00:07.0",
00:14:46.540 "trid": {
00:14:46.540 "trtype": "PCIe",
00:14:46.540 "traddr": "0000:00:07.0"
00:14:46.540 },
00:14:46.540 "ctrlr_data": {
00:14:46.540 "cntlid": 0,
00:14:46.540 "vendor_id": "0x1b36",
00:14:46.540 "model_number": "QEMU NVMe Ctrl",
00:14:46.540 "serial_number": "12341",
00:14:46.540 "firmware_revision": "8.0.0",
00:14:46.540 "subnqn": "nqn.2019-08.org.qemu:12341",
00:14:46.540 "oacs": {
00:14:46.540 "security": 0,
00:14:46.540 "format": 1,
00:14:46.540 "firmware": 0,
00:14:46.540 "ns_manage": 1
00:14:46.540 },
00:14:46.540 "multi_ctrlr": false,
00:14:46.540 "ana_reporting": false
00:14:46.540 },
00:14:46.540 "vs": {
00:14:46.540 "nvme_version": "1.4"
00:14:46.540 },
00:14:46.541 "ns_data": {
00:14:46.541 "id": 1,
00:14:46.541 "can_share": false
00:14:46.541 }
00:14:46.541 }
00:14:46.541 ],
00:14:46.541 "mp_policy": "active_passive"
00:14:46.541 }
00:14:46.541 }
00:14:46.541 ]'
00:14:46.541 14:15:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:14:46.541 14:15:47 -- common/autotest_common.sh@1372 -- # bs=4096
00:14:46.541 14:15:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:14:46.541 14:15:47 -- common/autotest_common.sh@1373 -- # nb=1310720
00:14:46.541 14:15:47 -- common/autotest_common.sh@1376 -- # bdev_size=5120
00:14:46.541 14:15:47 -- common/autotest_common.sh@1377 -- # echo 5120
00:14:46.541 14:15:47 -- ftl/common.sh@63 -- # base_size=5120
00:14:46.541 14:15:47 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:14:47.571 "name": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:47.571 "aliases": [ 00:14:47.571 "lvs/nvme0n1p0" 00:14:47.571 ], 00:14:47.571 "product_name": "Logical Volume", 00:14:47.571 "block_size": 4096, 00:14:47.571 "num_blocks": 26476544, 00:14:47.571 "uuid": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:47.571 "assigned_rate_limits": { 00:14:47.571 "rw_ios_per_sec": 0, 00:14:47.571 "rw_mbytes_per_sec": 0, 00:14:47.571 "r_mbytes_per_sec": 0, 00:14:47.571 "w_mbytes_per_sec": 0 00:14:47.571 }, 00:14:47.571 "claimed": false, 00:14:47.571 "zoned": false, 00:14:47.571 "supported_io_types": { 00:14:47.571 "read": true, 00:14:47.571 "write": true, 00:14:47.571 "unmap": true, 00:14:47.571 "write_zeroes": true, 00:14:47.571 "flush": false, 00:14:47.571 "reset": true, 00:14:47.571 "compare": false, 00:14:47.571 "compare_and_write": false, 00:14:47.571 "abort": false, 00:14:47.571 "nvme_admin": false, 00:14:47.571 "nvme_io": false 00:14:47.571 }, 00:14:47.571 "driver_specific": { 00:14:47.571 "lvol": { 00:14:47.571 "lvol_store_uuid": "783d806b-9824-4329-a4b9-f39dc49d9f64", 00:14:47.571 "base_bdev": "nvme0n1", 00:14:47.571 "thin_provision": true, 00:14:47.571 "snapshot": false, 00:14:47.571 "clone": false, 00:14:47.571 "esnap_clone": false 00:14:47.571 } 00:14:47.571 } 00:14:47.571 } 00:14:47.571 ]' 00:14:47.571 14:15:48 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:47.571 14:15:48 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:47.571 14:15:48 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:47.571 14:15:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:47.571 14:15:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:47.571 14:15:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:47.571 14:15:49 -- ftl/common.sh@41 -- # local base_size=5171 00:14:47.571 14:15:49 -- ftl/common.sh@44 -- # local nvc_bdev 00:14:47.571 14:15:49 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:14:47.828 14:15:49 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:47.828 14:15:49 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:47.828 14:15:49 -- ftl/common.sh@48 -- # get_bdev_size de2908b3-219d-4fab-887b-7e30cbd32562 00:14:47.828 14:15:49 -- common/autotest_common.sh@1367 -- # local bdev_name=de2908b3-219d-4fab-887b-7e30cbd32562 00:14:47.828 14:15:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:47.828 14:15:49 -- common/autotest_common.sh@1369 -- # local bs 00:14:47.828 14:15:49 -- common/autotest_common.sh@1370 -- # local nb 00:14:47.828 14:15:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de2908b3-219d-4fab-887b-7e30cbd32562 00:14:48.085 14:15:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:48.085 { 00:14:48.085 "name": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:48.085 "aliases": [ 00:14:48.085 "lvs/nvme0n1p0" 00:14:48.085 ], 00:14:48.085 "product_name": "Logical Volume", 00:14:48.086 "block_size": 4096, 00:14:48.086 "num_blocks": 26476544, 00:14:48.086 "uuid": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:48.086 "assigned_rate_limits": { 00:14:48.086 "rw_ios_per_sec": 0, 00:14:48.086 "rw_mbytes_per_sec": 0, 00:14:48.086 "r_mbytes_per_sec": 0, 00:14:48.086 "w_mbytes_per_sec": 0 00:14:48.086 }, 00:14:48.086 "claimed": false, 00:14:48.086 "zoned": false, 00:14:48.086 "supported_io_types": { 00:14:48.086 "read": true, 00:14:48.086 "write": true, 00:14:48.086 "unmap": true, 
00:14:48.086 "write_zeroes": true, 00:14:48.086 "flush": false, 00:14:48.086 "reset": true, 00:14:48.086 "compare": false, 00:14:48.086 "compare_and_write": false, 00:14:48.086 "abort": false, 00:14:48.086 "nvme_admin": false, 00:14:48.086 "nvme_io": false 00:14:48.086 }, 00:14:48.086 "driver_specific": { 00:14:48.086 "lvol": { 00:14:48.086 "lvol_store_uuid": "783d806b-9824-4329-a4b9-f39dc49d9f64", 00:14:48.086 "base_bdev": "nvme0n1", 00:14:48.086 "thin_provision": true, 00:14:48.086 "snapshot": false, 00:14:48.086 "clone": false, 00:14:48.086 "esnap_clone": false 00:14:48.086 } 00:14:48.086 } 00:14:48.086 } 00:14:48.086 ]' 00:14:48.086 14:15:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:48.086 14:15:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:48.086 14:15:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:14:48.086 14:15:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:48.086 14:15:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:48.086 14:15:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:48.086 14:15:49 -- ftl/common.sh@48 -- # cache_size=5171 00:14:48.086 14:15:49 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:48.343 14:15:49 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:14:48.343 14:15:49 -- ftl/bdevperf.sh@26 -- # get_bdev_size de2908b3-219d-4fab-887b-7e30cbd32562 00:14:48.343 14:15:49 -- common/autotest_common.sh@1367 -- # local bdev_name=de2908b3-219d-4fab-887b-7e30cbd32562 00:14:48.343 14:15:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:14:48.343 14:15:49 -- common/autotest_common.sh@1369 -- # local bs 00:14:48.343 14:15:49 -- common/autotest_common.sh@1370 -- # local nb 00:14:48.344 14:15:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de2908b3-219d-4fab-887b-7e30cbd32562 00:14:48.654 14:15:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:14:48.654 { 00:14:48.654 "name": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:48.654 "aliases": [ 00:14:48.654 "lvs/nvme0n1p0" 00:14:48.654 ], 00:14:48.654 "product_name": "Logical Volume", 00:14:48.654 "block_size": 4096, 00:14:48.654 "num_blocks": 26476544, 00:14:48.654 "uuid": "de2908b3-219d-4fab-887b-7e30cbd32562", 00:14:48.654 "assigned_rate_limits": { 00:14:48.654 "rw_ios_per_sec": 0, 00:14:48.654 "rw_mbytes_per_sec": 0, 00:14:48.654 "r_mbytes_per_sec": 0, 00:14:48.654 "w_mbytes_per_sec": 0 00:14:48.654 }, 00:14:48.654 "claimed": false, 00:14:48.654 "zoned": false, 00:14:48.654 "supported_io_types": { 00:14:48.654 "read": true, 00:14:48.654 "write": true, 00:14:48.654 "unmap": true, 00:14:48.654 "write_zeroes": true, 00:14:48.654 "flush": false, 00:14:48.654 "reset": true, 00:14:48.654 "compare": false, 00:14:48.654 "compare_and_write": false, 00:14:48.654 "abort": false, 00:14:48.654 "nvme_admin": false, 00:14:48.654 "nvme_io": false 00:14:48.654 }, 00:14:48.654 "driver_specific": { 00:14:48.654 "lvol": { 00:14:48.654 "lvol_store_uuid": "783d806b-9824-4329-a4b9-f39dc49d9f64", 00:14:48.654 "base_bdev": "nvme0n1", 00:14:48.654 "thin_provision": true, 00:14:48.654 "snapshot": false, 00:14:48.654 "clone": false, 00:14:48.654 "esnap_clone": false 00:14:48.654 } 00:14:48.654 } 00:14:48.654 } 00:14:48.654 ]' 00:14:48.654 14:15:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:14:48.654 14:15:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:14:48.654 14:15:49 -- common/autotest_common.sh@1373 -- 
# jq '.[] .num_blocks' 00:14:48.654 14:15:49 -- common/autotest_common.sh@1373 -- # nb=26476544 00:14:48.654 14:15:49 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:14:48.654 14:15:49 -- common/autotest_common.sh@1377 -- # echo 103424 00:14:48.654 14:15:49 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:14:48.654 14:15:49 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d de2908b3-219d-4fab-887b-7e30cbd32562 -c nvc0n1p0 --l2p_dram_limit 20 00:14:48.915 [2024-12-04 14:15:50.148451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.148490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:48.915 [2024-12-04 14:15:50.148502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:48.915 [2024-12-04 14:15:50.148509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.915 [2024-12-04 14:15:50.148551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.148558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:48.915 [2024-12-04 14:15:50.148566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:14:48.915 [2024-12-04 14:15:50.148571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.915 [2024-12-04 14:15:50.148585] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:48.915 [2024-12-04 14:15:50.149203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:48.915 [2024-12-04 14:15:50.149220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.149226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:48.915 [2024-12-04 14:15:50.149234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:14:48.915 [2024-12-04 14:15:50.149240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.915 [2024-12-04 14:15:50.149263] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ccd0c0ec-c478-493d-8536-d5002dd4031d 00:14:48.915 [2024-12-04 14:15:50.150242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.150266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:48.915 [2024-12-04 14:15:50.150274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:14:48.915 [2024-12-04 14:15:50.150281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.915 [2024-12-04 14:15:50.154932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.154964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:48.915 [2024-12-04 14:15:50.154971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.601 ms 00:14:48.915 [2024-12-04 14:15:50.154978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.915 [2024-12-04 14:15:50.155044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.915 [2024-12-04 14:15:50.155052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:48.915 [2024-12-04 14:15:50.155058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
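The bdev_ftl_create call is where everything assembled so far comes together: -d names the thinly provisioned lvol as the base (data) device, -c names the nvc0n1p0 split as the NV cache, and --l2p_dram_limit caps the resident L2P cache in DRAM; -t 240 raises the RPC client timeout because first-time startup scrubs the cache region, which the trace below shows taking seconds. For reference, the same call with the harness variables expanded (argument meanings as observed in this run):

    #   -b  name of the FTL bdev to create
    #   -d  base (data) bdev: the thinly provisioned lvol
    #   -c  NV cache bdev: the 5171 MiB split of nvc0n1
    #   --l2p_dram_limit  cap the resident L2P cache at 20 MiB
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d de2908b3-219d-4fab-887b-7e30cbd32562 -c nvc0n1p0 --l2p_dram_limit 20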
[2024-12-04 14:15:50.148451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.148490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:14:48.915 [2024-12-04 14:15:50.148502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:14:48.915 [2024-12-04 14:15:50.148509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.148551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.148558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:14:48.915 [2024-12-04 14:15:50.148566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:14:48.915 [2024-12-04 14:15:50.148571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.148585] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:14:48.915 [2024-12-04 14:15:50.149203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:14:48.915 [2024-12-04 14:15:50.149220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.149226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:14:48.915 [2024-12-04 14:15:50.149234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms
00:14:48.915 [2024-12-04 14:15:50.149240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.149263] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ccd0c0ec-c478-493d-8536-d5002dd4031d
00:14:48.915 [2024-12-04 14:15:50.150242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.150266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:14:48.915 [2024-12-04 14:15:50.150274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:14:48.915 [2024-12-04 14:15:50.150281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.154932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.154964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:14:48.915 [2024-12-04 14:15:50.154971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.601 ms
00:14:48.915 [2024-12-04 14:15:50.154978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.155044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.155052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:14:48.915 [2024-12-04 14:15:50.155058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:14:48.915 [2024-12-04 14:15:50.155068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.155112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.155122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:14:48.915 [2024-12-04 14:15:50.155130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:14:48.915 [2024-12-04 14:15:50.155137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.155153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:14:48.915 [2024-12-04 14:15:50.158049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.158074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:14:48.915 [2024-12-04 14:15:50.158097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms
00:14:48.915 [2024-12-04 14:15:50.158104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.158130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.158136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:14:48.915 [2024-12-04 14:15:50.158143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:14:48.915 [2024-12-04 14:15:50.158149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.158167] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:14:48.915 [2024-12-04 14:15:50.158256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:14:48.915 [2024-12-04 14:15:50.158269] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:14:48.915 [2024-12-04 14:15:50.158277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:14:48.915 [2024-12-04 14:15:50.158287] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158294] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158301] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:14:48.915 [2024-12-04 14:15:50.158307] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:14:48.915 [2024-12-04 14:15:50.158316] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:14:48.915 [2024-12-04 14:15:50.158322] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:14:48.915 [2024-12-04 14:15:50.158328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.158334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:14:48.915 [2024-12-04 14:15:50.158341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms
00:14:48.915 [2024-12-04 14:15:50.158346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.158391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.915 [2024-12-04 14:15:50.158397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:14:48.915 [2024-12-04 14:15:50.158404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:14:48.915 [2024-12-04 14:15:50.158409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:48.915 [2024-12-04 14:15:50.158464] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:14:48.915 [2024-12-04 14:15:50.158471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:14:48.915 [2024-12-04 14:15:50.158479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:14:48.915 [2024-12-04 14:15:50.158500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:14:48.915 [2024-12-04 14:15:50.158517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:14:48.915 [2024-12-04 14:15:50.158529] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:14:48.915 [2024-12-04 14:15:50.158535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:14:48.915 [2024-12-04 14:15:50.158541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:14:48.915 [2024-12-04 14:15:50.158548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:14:48.915 [2024-12-04 14:15:50.158555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:14:48.915 [2024-12-04 14:15:50.158560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:14:48.915 [2024-12-04 14:15:50.158572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:14:48.915 [2024-12-04 14:15:50.158578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:14:48.915 [2024-12-04 14:15:50.158589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:14:48.915 [2024-12-04 14:15:50.158594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:14:48.915 [2024-12-04 14:15:50.158605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:14:48.915 [2024-12-04 14:15:50.158622] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:14:48.915 [2024-12-04 14:15:50.158638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:14:48.915 [2024-12-04 14:15:50.158657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:14:48.915 [2024-12-04 14:15:50.158672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:14:48.915 [2024-12-04 14:15:50.158684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:14:48.915 [2024-12-04 14:15:50.158690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:14:48.915 [2024-12-04 14:15:50.158695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:14:48.915 [2024-12-04 14:15:50.158701] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:14:48.915 [2024-12-04 14:15:50.158706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:14:48.915 [2024-12-04 14:15:50.158712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:48.915 [2024-12-04 14:15:50.158724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:14:48.915 [2024-12-04 14:15:50.158730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:14:48.915 [2024-12-04 14:15:50.158737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:14:48.915 [2024-12-04 14:15:50.158742] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:14:48.915 [2024-12-04 14:15:50.158750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:14:48.915 [2024-12-04 14:15:50.158755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:14:48.915 [2024-12-04 14:15:50.158762] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:14:48.915 [2024-12-04 14:15:50.158769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:14:48.915 [2024-12-04 14:15:50.158778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:14:48.915 [2024-12-04 14:15:50.158784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:14:48.915 [2024-12-04 14:15:50.158790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:14:48.915 [2024-12-04 14:15:50.158795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:14:48.916 [2024-12-04 14:15:50.158802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:14:48.916 [2024-12-04 14:15:50.158807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:14:48.916 [2024-12-04 14:15:50.158813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:14:48.916 [2024-12-04 14:15:50.158818] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:14:48.916 [2024-12-04 14:15:50.158825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:14:48.916 [2024-12-04 14:15:50.158832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:14:48.916 [2024-12-04 14:15:50.158935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:14:48.916 [2024-12-04 14:15:50.158941] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:14:48.916 [2024-12-04 14:15:50.158949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:14:48.916 [2024-12-04 14:15:50.158954] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:14:48.916 [2024-12-04 14:15:50.158961] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:14:48.916 [2024-12-04 14:15:50.158967] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:14:48.916 [2024-12-04 14:15:50.158974] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:14:48.916 [2024-12-04 14:15:50.158979] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:14:48.916 [2024-12-04 14:15:50.158986] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:14:48.916 [2024-12-04 14:15:50.158992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:48.916 [2024-12-04 14:15:50.158998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:14:48.916 [2024-12-04 14:15:50.159004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms
00:14:48.916 [2024-12-04 14:15:50.159011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
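The layout dump is internally consistent and worth a quick check: with "L2P entries: 20971520" and "L2P address size: 4", the full mapping table is 20971520 x 4 B = 80 MiB, exactly the 80.00 MiB shown for the l2p region, and at 4 KiB per mapped block those entries cover 80 GiB of addressable space. Because the bdev was created with --l2p_dram_limit 20, only a small fraction of that table stays resident, which the "l2p maximum resident size is: 19 (of 20) MiB" notice further down confirms. The arithmetic, in bash:

    # Quick consistency check of the dumped FTL geometry.
    entries=20971520
    addr_size=4
    echo $(( entries * addr_size / 1024 / 1024 ))    # 80, matches the l2p region's 80.00 MiB
    echo $(( entries * 4096 / 1024 / 1024 / 1024 ))  # 80 GiB of 4 KiB blocks the L2P can map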
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:48.916 [2024-12-04 14:15:50.170950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:14:48.916 [2024-12-04 14:15:50.170957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.224184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.224223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:48.916 [2024-12-04 14:15:50.224234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.194 ms 00:14:48.916 [2024-12-04 14:15:50.224243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.224273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.224285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:48.916 [2024-12-04 14:15:50.224293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:48.916 [2024-12-04 14:15:50.224302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.224638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.224663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:48.916 [2024-12-04 14:15:50.224672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:14:48.916 [2024-12-04 14:15:50.224686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.224790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.224810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:48.916 [2024-12-04 14:15:50.224820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:14:48.916 [2024-12-04 14:15:50.224829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.238557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.238599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:48.916 [2024-12-04 14:15:50.238611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:14:48.916 [2024-12-04 14:15:50.238619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.250048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:14:48.916 [2024-12-04 14:15:50.255055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.255083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:48.916 [2024-12-04 14:15:50.255104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.366 ms 00:14:48.916 [2024-12-04 14:15:50.255111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:48.916 [2024-12-04 14:15:50.335993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:48.916 [2024-12-04 14:15:50.336049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:48.916 [2024-12-04 14:15:50.336065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.853 ms 00:14:48.916 [2024-12-04 14:15:50.336073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:14:48.916 [2024-12-04 14:15:50.336129] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:14:48.916 [2024-12-04 14:15:50.336141] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:14:52.246 [2024-12-04 14:15:53.172527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.172586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:52.246 [2024-12-04 14:15:53.172604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2836.381 ms 00:14:52.246 [2024-12-04 14:15:53.172612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.172802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.172813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:52.246 [2024-12-04 14:15:53.172823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:14:52.246 [2024-12-04 14:15:53.172831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.196746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.196779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:52.246 [2024-12-04 14:15:53.196792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.874 ms 00:14:52.246 [2024-12-04 14:15:53.196806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.220299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.220329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:52.246 [2024-12-04 14:15:53.220343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.458 ms 00:14:52.246 [2024-12-04 14:15:53.220350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.220712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.220723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:52.246 [2024-12-04 14:15:53.220732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:14:52.246 [2024-12-04 14:15:53.220739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.285004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.285038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:52.246 [2024-12-04 14:15:53.285051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.236 ms 00:14:52.246 [2024-12-04 14:15:53.285059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.309960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.309992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:52.246 [2024-12-04 14:15:53.310005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.845 ms 00:14:52.246 [2024-12-04 14:15:53.310012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 
14:15:53.311362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.311521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:14:52.246 [2024-12-04 14:15:53.311541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:14:52.246 [2024-12-04 14:15:53.311551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.335436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.335464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:52.246 [2024-12-04 14:15:53.335476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.851 ms 00:14:52.246 [2024-12-04 14:15:53.335483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.335519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.335527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:52.246 [2024-12-04 14:15:53.335539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:52.246 [2024-12-04 14:15:53.335546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.335622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:52.246 [2024-12-04 14:15:53.335631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:52.246 [2024-12-04 14:15:53.335641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:14:52.246 [2024-12-04 14:15:53.335648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:52.246 [2024-12-04 14:15:53.336472] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3187.599 ms, result 0 00:14:52.246 { 00:14:52.246 "name": "ftl0", 00:14:52.246 "uuid": "ccd0c0ec-c478-493d-8536-d5002dd4031d" 00:14:52.246 } 00:14:52.246 14:15:53 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:14:52.246 14:15:53 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:14:52.246 14:15:53 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:14:52.246 14:15:53 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:14:52.246 [2024-12-04 14:15:53.636809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:14:52.246 I/O size of 69632 is greater than zero copy threshold (65536). 00:14:52.246 Zero copy mechanism will not be used. 00:14:52.246 Running I/O for 4 seconds... 
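A minimal aside sketching the check behind the zero-copy notice above (not from the captured output): the 69632-byte I/O size of this first run exceeds bdevperf's stated 65536-byte zero-copy threshold by exactly one 4 KiB block, so the run proceeds with regular buffers.
  $ echo $((69632 > 65536)) $((69632 - 65536))
  1 4096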
00:14:56.459 00:14:56.459 Latency(us) 00:14:56.459 [2024-12-04T14:15:57.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.459 [2024-12-04T14:15:57.924Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:14:56.459 ftl0 : 4.00 744.72 49.45 0.00 0.00 1422.49 406.45 1966.08 00:14:56.459 [2024-12-04T14:15:57.924Z] =================================================================================================================== 00:14:56.459 [2024-12-04T14:15:57.924Z] Total : 744.72 49.45 0.00 0.00 1422.49 406.45 1966.08 00:14:56.459 0 00:14:56.459 [2024-12-04 14:15:57.645140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:14:56.459 14:15:57 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-12-04 14:15:57.743927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:00.671 Running I/O for 4 seconds... 00:15:00.671 00:15:00.671 Latency(us) 00:15:00.671 [2024-12-04T14:16:02.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.671 [2024-12-04T14:16:02.136Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.671 ftl0 : 4.03 5880.07 22.97 0.00 0.00 21683.06 315.08 49000.76 00:15:00.671 [2024-12-04T14:16:02.136Z] =================================================================================================================== 00:15:00.671 [2024-12-04T14:16:02.136Z] Total : 5880.07 22.97 0.00 0.00 21683.06 0.00 49000.76 00:15:00.671 [2024-12-04 14:16:01.783722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:00.671 14:16:01 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-12-04 14:16:01.887155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds... 
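The MiB/s column in the two completed runs above can be cross-checked as IOPS times I/O size; a minimal sketch of that arithmetic (not captured output; awk is used here only as a calculator):
  $ awk 'BEGIN { printf "%.2f %.2f\n", 744.72 * 69632 / 2^20, 5880.07 * 4096 / 2^20 }'
  49.45 22.97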
00:15:04.882 00:15:04.882 Latency(us) 00:15:04.882 [2024-12-04T14:16:06.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.882 [2024-12-04T14:16:06.347Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:04.882 Verification LBA range: start 0x0 length 0x1400000 00:15:04.882 ftl0 : 4.01 9677.64 37.80 0.00 0.00 13197.64 217.40 23895.43 00:15:04.882 [2024-12-04T14:16:06.347Z] =================================================================================================================== 00:15:04.882 [2024-12-04T14:16:06.347Z] Total : 9677.64 37.80 0.00 0.00 13197.64 0.00 23895.43 00:15:04.882 [2024-12-04 14:16:05.907549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:04.882 14:16:05 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-12-04 14:16:06.089250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.882 [2024-12-04 14:16:06.089292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:04.882 [2024-12-04 14:16:06.089307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:04.882 [2024-12-04 14:16:06.089315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.882 [2024-12-04 14:16:06.089338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:04.882 [2024-12-04 14:16:06.091946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.882 [2024-12-04 14:16:06.091979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:04.882 [2024-12-04 14:16:06.091988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms 00:15:04.882 [2024-12-04 14:16:06.092000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.882 [2024-12-04 14:16:06.094312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.882 [2024-12-04 14:16:06.094432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:04.882 [2024-12-04 14:16:06.094448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:15:04.882 [2024-12-04 14:16:06.094457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.882 [2024-12-04 14:16:06.273112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.273162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:04.883 [2024-12-04 14:16:06.273177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.636 ms 00:15:04.883 [2024-12-04 14:16:06.273187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.883 [2024-12-04 14:16:06.279267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.279297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:04.883 [2024-12-04 14:16:06.279307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms 00:15:04.883 [2024-12-04 14:16:06.279316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.883 [2024-12-04 14:16:06.303257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.303296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:15:04.883 [2024-12-04 14:16:06.303307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.894 ms 00:15:04.883 [2024-12-04 14:16:06.303319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.883 [2024-12-04 14:16:06.318946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.319078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:04.883 [2024-12-04 14:16:06.319103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.594 ms 00:15:04.883 [2024-12-04 14:16:06.319113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.883 [2024-12-04 14:16:06.319245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.319258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:04.883 [2024-12-04 14:16:06.319267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:15:04.883 [2024-12-04 14:16:06.319275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:04.883 [2024-12-04 14:16:06.343077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:04.883 [2024-12-04 14:16:06.343119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:04.883 [2024-12-04 14:16:06.343129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.787 ms 00:15:04.883 [2024-12-04 14:16:06.343137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.144 [2024-12-04 14:16:06.366767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.144 [2024-12-04 14:16:06.366802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:05.144 [2024-12-04 14:16:06.366812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.598 ms 00:15:05.144 [2024-12-04 14:16:06.366823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.144 [2024-12-04 14:16:06.389484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.144 [2024-12-04 14:16:06.389604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:05.144 [2024-12-04 14:16:06.389619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.631 ms 00:15:05.144 [2024-12-04 14:16:06.389627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.144 [2024-12-04 14:16:06.412730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.144 [2024-12-04 14:16:06.412836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:05.144 [2024-12-04 14:16:06.412850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.047 ms 00:15:05.144 [2024-12-04 14:16:06.412858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.144 [2024-12-04 14:16:06.412883] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:05.144 [2024-12-04 14:16:06.412898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412925] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.412992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 
14:16:06.413150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:05.144 [2024-12-04 14:16:06.413353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:15:05.145 [2024-12-04 14:16:06.413361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:05.145 [2024-12-04 14:16:06.413762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:05.145 [2024-12-04 14:16:06.413770] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ccd0c0ec-c478-493d-8536-d5002dd4031d 00:15:05.145 [2024-12-04 14:16:06.413780] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:05.145 
[2024-12-04 14:16:06.413787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:05.145 [2024-12-04 14:16:06.413795] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:05.145 [2024-12-04 14:16:06.413803] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:05.145 [2024-12-04 14:16:06.413811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:05.145 [2024-12-04 14:16:06.413820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:05.145 [2024-12-04 14:16:06.413828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:05.145 [2024-12-04 14:16:06.413834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:05.145 [2024-12-04 14:16:06.413841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:05.145 [2024-12-04 14:16:06.413849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.145 [2024-12-04 14:16:06.413857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:05.145 [2024-12-04 14:16:06.413865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:15:05.145 [2024-12-04 14:16:06.413873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.426453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.145 [2024-12-04 14:16:06.426482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:05.145 [2024-12-04 14:16:06.426491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.554 ms 00:15:05.145 [2024-12-04 14:16:06.426504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.426694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:05.145 [2024-12-04 14:16:06.426704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:05.145 [2024-12-04 14:16:06.426712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:15:05.145 [2024-12-04 14:16:06.426720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.463726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.463768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:05.145 [2024-12-04 14:16:06.463780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.463789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.463841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.463850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:05.145 [2024-12-04 14:16:06.463858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.463866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.463921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.463933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:05.145 [2024-12-04 14:16:06.463941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.463953] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.463967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.463976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:05.145 [2024-12-04 14:16:06.463983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.463992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.539493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.539538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:05.145 [2024-12-04 14:16:06.539550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.539562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.569068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.569120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:05.145 [2024-12-04 14:16:06.569131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.569141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.569201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.569212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:05.145 [2024-12-04 14:16:06.569220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.145 [2024-12-04 14:16:06.569231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.145 [2024-12-04 14:16:06.569277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.145 [2024-12-04 14:16:06.569295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:05.146 [2024-12-04 14:16:06.569305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.146 [2024-12-04 14:16:06.569313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.146 [2024-12-04 14:16:06.569400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.146 [2024-12-04 14:16:06.569412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:05.146 [2024-12-04 14:16:06.569419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.146 [2024-12-04 14:16:06.569428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.146 [2024-12-04 14:16:06.569454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.146 [2024-12-04 14:16:06.569466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:05.146 [2024-12-04 14:16:06.569473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.146 [2024-12-04 14:16:06.569482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.146 [2024-12-04 14:16:06.569515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.146 [2024-12-04 14:16:06.569525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:05.146 [2024-12-04 14:16:06.569532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:15:05.146 [2024-12-04 14:16:06.569542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.146 [2024-12-04 14:16:06.569585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:05.146 [2024-12-04 14:16:06.569596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:05.146 [2024-12-04 14:16:06.569604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:05.146 [2024-12-04 14:16:06.569612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:05.146 [2024-12-04 14:16:06.569730] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 480.444 ms, result 0 00:15:05.146 true 00:15:05.146 14:16:06 -- ftl/bdevperf.sh@37 -- # killprocess 71239 00:15:05.146 14:16:06 -- common/autotest_common.sh@936 -- # '[' -z 71239 ']' 00:15:05.146 14:16:06 -- common/autotest_common.sh@940 -- # kill -0 71239 00:15:05.146 14:16:06 -- common/autotest_common.sh@941 -- # uname 00:15:05.146 14:16:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.146 14:16:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71239 00:15:05.406 killing process with pid 71239 00:15:05.406 Received shutdown signal, test time was about 4.000000 seconds 00:15:05.406 00:15:05.406 Latency(us) 00:15:05.406 [2024-12-04T14:16:06.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.406 [2024-12-04T14:16:06.871Z] =================================================================================================================== 00:15:05.406 [2024-12-04T14:16:06.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.406 14:16:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.406 14:16:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.406 14:16:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71239' 00:15:05.406 14:16:06 -- common/autotest_common.sh@955 -- # kill 71239 00:15:05.406 14:16:06 -- common/autotest_common.sh@960 -- # wait 71239 00:15:10.736 14:16:11 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:15:10.736 14:16:11 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:15:10.736 14:16:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.736 14:16:11 -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 Remove shared memory files 00:15:10.736 14:16:12 -- ftl/bdevperf.sh@41 -- # remove_shm 00:15:10.736 14:16:12 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:10.736 14:16:12 -- ftl/common.sh@205 -- # rm -f rm -f 00:15:10.736 14:16:12 -- ftl/common.sh@206 -- # rm -f rm -f 00:15:10.736 14:16:12 -- ftl/common.sh@207 -- # rm -f rm -f 00:15:10.736 14:16:12 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:10.736 14:16:12 -- ftl/common.sh@209 -- # rm -f rm -f 00:15:10.736 ************************************ 00:15:10.736 END TEST ftl_bdevperf 00:15:10.736 ************************************ 00:15:10.736 00:15:10.736 real 0m25.606s 00:15:10.736 user 0m27.979s 00:15:10.736 sys 0m0.903s 00:15:10.736 14:16:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:10.736 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 14:16:12 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:10.736 14:16:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
00:15:10.736 14:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.736 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 ************************************ 00:15:10.736 START TEST ftl_trim 00:15:10.736 ************************************ 00:15:10.736 14:16:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:15:10.736 * Looking for test storage... 00:15:10.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:10.736 14:16:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:10.736 14:16:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:10.736 14:16:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:10.997 14:16:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:10.997 14:16:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:10.997 14:16:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:10.998 14:16:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:10.998 14:16:12 -- scripts/common.sh@335 -- # IFS=.-: 00:15:10.998 14:16:12 -- scripts/common.sh@335 -- # read -ra ver1 00:15:10.998 14:16:12 -- scripts/common.sh@336 -- # IFS=.-: 00:15:10.998 14:16:12 -- scripts/common.sh@336 -- # read -ra ver2 00:15:10.998 14:16:12 -- scripts/common.sh@337 -- # local 'op=<' 00:15:10.998 14:16:12 -- scripts/common.sh@339 -- # ver1_l=2 00:15:10.998 14:16:12 -- scripts/common.sh@340 -- # ver2_l=1 00:15:10.998 14:16:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:10.998 14:16:12 -- scripts/common.sh@343 -- # case "$op" in 00:15:10.998 14:16:12 -- scripts/common.sh@344 -- # : 1 00:15:10.998 14:16:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:10.998 14:16:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:10.998 14:16:12 -- scripts/common.sh@364 -- # decimal 1 00:15:10.998 14:16:12 -- scripts/common.sh@352 -- # local d=1 00:15:10.998 14:16:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:10.998 14:16:12 -- scripts/common.sh@354 -- # echo 1 00:15:10.998 14:16:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:10.998 14:16:12 -- scripts/common.sh@365 -- # decimal 2 00:15:10.998 14:16:12 -- scripts/common.sh@352 -- # local d=2 00:15:10.998 14:16:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:10.998 14:16:12 -- scripts/common.sh@354 -- # echo 2 00:15:10.998 14:16:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:10.998 14:16:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:10.998 14:16:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:10.998 14:16:12 -- scripts/common.sh@367 -- # return 0 00:15:10.998 14:16:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:10.998 14:16:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:10.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.998 --rc genhtml_branch_coverage=1 00:15:10.998 --rc genhtml_function_coverage=1 00:15:10.998 --rc genhtml_legend=1 00:15:10.998 --rc geninfo_all_blocks=1 00:15:10.998 --rc geninfo_unexecuted_blocks=1 00:15:10.998 00:15:10.998 ' 00:15:10.998 14:16:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:10.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.998 --rc genhtml_branch_coverage=1 00:15:10.998 --rc genhtml_function_coverage=1 00:15:10.998 --rc genhtml_legend=1 00:15:10.998 --rc geninfo_all_blocks=1 00:15:10.998 --rc geninfo_unexecuted_blocks=1 00:15:10.998 00:15:10.998 ' 00:15:10.998 14:16:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:10.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.998 --rc genhtml_branch_coverage=1 00:15:10.998 --rc genhtml_function_coverage=1 00:15:10.998 --rc genhtml_legend=1 00:15:10.998 --rc geninfo_all_blocks=1 00:15:10.998 --rc geninfo_unexecuted_blocks=1 00:15:10.998 00:15:10.998 ' 00:15:10.998 14:16:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:10.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:10.998 --rc genhtml_branch_coverage=1 00:15:10.998 --rc genhtml_function_coverage=1 00:15:10.998 --rc genhtml_legend=1 00:15:10.998 --rc geninfo_all_blocks=1 00:15:10.998 --rc geninfo_unexecuted_blocks=1 00:15:10.998 00:15:10.998 ' 00:15:10.998 14:16:12 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:10.998 14:16:12 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:10.998 14:16:12 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:10.998 14:16:12 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:10.998 14:16:12 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
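The lcov version gate traced above (cmp_versions/lt in scripts/common.sh) amounts to a field-wise numeric compare of version strings split on ".", "-" and ":". A minimal standalone sketch of that logic, where check_lt is a hypothetical helper name rather than the script's own:
  check_lt() {  # hypothetical restatement of the traced lt(): succeed when $1 sorts before $2
      local -a v1 v2
      local i
      IFS=.-: read -ra v1 <<< "$1"   # same IFS=.-: field split the xtrace above shows
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1                       # equal versions are not "less than"
  }
  check_lt 1.15 2 && echo 'legacy --rc lcov_*_coverage options selected'
Matching the trace: lt 1.15 2 succeeded, so lcov_rc_opt was set to the legacy --rc options.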
00:15:10.998 14:16:12 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:10.998 14:16:12 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.998 14:16:12 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:10.998 14:16:12 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:10.998 14:16:12 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.998 14:16:12 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.998 14:16:12 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:10.998 14:16:12 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:10.998 14:16:12 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:10.998 14:16:12 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:10.998 14:16:12 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:10.998 14:16:12 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:10.998 14:16:12 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.998 14:16:12 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.998 14:16:12 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:10.998 14:16:12 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:10.998 14:16:12 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:10.998 14:16:12 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:10.998 14:16:12 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:10.998 14:16:12 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:10.998 14:16:12 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:10.998 14:16:12 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:10.998 14:16:12 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.998 14:16:12 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:10.998 14:16:12 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.998 14:16:12 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:15:10.998 14:16:12 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:15:10.998 14:16:12 -- ftl/trim.sh@25 -- # timeout=240 00:15:10.998 14:16:12 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:10.998 14:16:12 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:10.998 14:16:12 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:10.998 14:16:12 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:10.998 14:16:12 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:10.998 14:16:12 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:10.998 14:16:12 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:10.998 14:16:12 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:10.998 14:16:12 -- ftl/trim.sh@40 -- # svcpid=71643 00:15:10.998 14:16:12 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:10.998 14:16:12 -- ftl/trim.sh@41 -- # waitforlisten 71643 00:15:10.998 14:16:12 -- common/autotest_common.sh@829 -- # '[' -z 71643 ']' 00:15:10.998 14:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.998 
14:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.998 14:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.998 14:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.998 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:15:10.998 [2024-12-04 14:16:12.312954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:10.998 [2024-12-04 14:16:12.313067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71643 ] 00:15:11.260 [2024-12-04 14:16:12.463391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:11.260 [2024-12-04 14:16:12.640019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.260 [2024-12-04 14:16:12.640447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.260 [2024-12-04 14:16:12.640747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.260 [2024-12-04 14:16:12.640863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.648 14:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.648 14:16:13 -- common/autotest_common.sh@862 -- # return 0 00:15:12.648 14:16:13 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:12.648 14:16:13 -- ftl/common.sh@54 -- # local name=nvme0 00:15:12.648 14:16:13 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:12.648 14:16:13 -- ftl/common.sh@56 -- # local size=103424 00:15:12.648 14:16:13 -- ftl/common.sh@59 -- # local base_bdev 00:15:12.648 14:16:13 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:12.648 14:16:14 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:12.648 14:16:14 -- ftl/common.sh@62 -- # local base_size 00:15:12.648 14:16:14 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:12.648 14:16:14 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:12.648 14:16:14 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:12.648 14:16:14 -- common/autotest_common.sh@1369 -- # local bs 00:15:12.648 14:16:14 -- common/autotest_common.sh@1370 -- # local nb 00:15:12.648 14:16:14 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:12.907 14:16:14 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:12.907 { 00:15:12.907 "name": "nvme0n1", 00:15:12.907 "aliases": [ 00:15:12.907 "e6b05b9a-4105-4129-ae83-005593eb7d5a" 00:15:12.907 ], 00:15:12.907 "product_name": "NVMe disk", 00:15:12.907 "block_size": 4096, 00:15:12.907 "num_blocks": 1310720, 00:15:12.907 "uuid": "e6b05b9a-4105-4129-ae83-005593eb7d5a", 00:15:12.907 "assigned_rate_limits": { 00:15:12.907 "rw_ios_per_sec": 0, 00:15:12.907 "rw_mbytes_per_sec": 0, 00:15:12.907 "r_mbytes_per_sec": 0, 00:15:12.907 "w_mbytes_per_sec": 0 00:15:12.907 }, 00:15:12.907 "claimed": true, 00:15:12.907 "claim_type": "read_many_write_one", 00:15:12.907 "zoned": false, 00:15:12.907 "supported_io_types": { 00:15:12.907 "read": true, 00:15:12.907 "write": true, 00:15:12.907 "unmap": true, 00:15:12.907 
"write_zeroes": true, 00:15:12.907 "flush": true, 00:15:12.907 "reset": true, 00:15:12.907 "compare": true, 00:15:12.907 "compare_and_write": false, 00:15:12.907 "abort": true, 00:15:12.907 "nvme_admin": true, 00:15:12.907 "nvme_io": true 00:15:12.907 }, 00:15:12.907 "driver_specific": { 00:15:12.907 "nvme": [ 00:15:12.907 { 00:15:12.907 "pci_address": "0000:00:07.0", 00:15:12.907 "trid": { 00:15:12.907 "trtype": "PCIe", 00:15:12.907 "traddr": "0000:00:07.0" 00:15:12.907 }, 00:15:12.907 "ctrlr_data": { 00:15:12.907 "cntlid": 0, 00:15:12.907 "vendor_id": "0x1b36", 00:15:12.907 "model_number": "QEMU NVMe Ctrl", 00:15:12.907 "serial_number": "12341", 00:15:12.907 "firmware_revision": "8.0.0", 00:15:12.907 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:12.907 "oacs": { 00:15:12.907 "security": 0, 00:15:12.907 "format": 1, 00:15:12.907 "firmware": 0, 00:15:12.907 "ns_manage": 1 00:15:12.907 }, 00:15:12.907 "multi_ctrlr": false, 00:15:12.907 "ana_reporting": false 00:15:12.907 }, 00:15:12.907 "vs": { 00:15:12.907 "nvme_version": "1.4" 00:15:12.907 }, 00:15:12.907 "ns_data": { 00:15:12.907 "id": 1, 00:15:12.907 "can_share": false 00:15:12.907 } 00:15:12.907 } 00:15:12.907 ], 00:15:12.908 "mp_policy": "active_passive" 00:15:12.908 } 00:15:12.908 } 00:15:12.908 ]' 00:15:12.908 14:16:14 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:12.908 14:16:14 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:12.908 14:16:14 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:12.908 14:16:14 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:12.908 14:16:14 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:12.908 14:16:14 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:12.908 14:16:14 -- ftl/common.sh@63 -- # base_size=5120 00:15:12.908 14:16:14 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:12.908 14:16:14 -- ftl/common.sh@67 -- # clear_lvols 00:15:12.908 14:16:14 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:12.908 14:16:14 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:13.166 14:16:14 -- ftl/common.sh@28 -- # stores=783d806b-9824-4329-a4b9-f39dc49d9f64 00:15:13.166 14:16:14 -- ftl/common.sh@29 -- # for lvs in $stores 00:15:13.166 14:16:14 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 783d806b-9824-4329-a4b9-f39dc49d9f64 00:15:13.425 14:16:14 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:13.684 14:16:14 -- ftl/common.sh@68 -- # lvs=5717eb30-7b1e-49a7-a4b0-780e97f006bd 00:15:13.684 14:16:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5717eb30-7b1e-49a7-a4b0-780e97f006bd 00:15:13.942 14:16:15 -- ftl/trim.sh@43 -- # split_bdev=73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- ftl/common.sh@35 -- # local name=nvc0 00:15:13.942 14:16:15 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:13.942 14:16:15 -- ftl/common.sh@37 -- # local base_bdev=73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- ftl/common.sh@38 -- # local cache_size= 00:15:13.942 14:16:15 -- ftl/common.sh@41 -- # get_bdev_size 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- common/autotest_common.sh@1367 -- # local bdev_name=73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:15:13.942 14:16:15 -- common/autotest_common.sh@1369 -- # local bs 00:15:13.942 14:16:15 -- common/autotest_common.sh@1370 -- # local nb 00:15:13.942 14:16:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:13.942 14:16:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:13.942 { 00:15:13.942 "name": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:13.942 "aliases": [ 00:15:13.942 "lvs/nvme0n1p0" 00:15:13.942 ], 00:15:13.942 "product_name": "Logical Volume", 00:15:13.942 "block_size": 4096, 00:15:13.942 "num_blocks": 26476544, 00:15:13.942 "uuid": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:13.942 "assigned_rate_limits": { 00:15:13.942 "rw_ios_per_sec": 0, 00:15:13.942 "rw_mbytes_per_sec": 0, 00:15:13.942 "r_mbytes_per_sec": 0, 00:15:13.942 "w_mbytes_per_sec": 0 00:15:13.942 }, 00:15:13.942 "claimed": false, 00:15:13.942 "zoned": false, 00:15:13.942 "supported_io_types": { 00:15:13.942 "read": true, 00:15:13.942 "write": true, 00:15:13.942 "unmap": true, 00:15:13.942 "write_zeroes": true, 00:15:13.942 "flush": false, 00:15:13.942 "reset": true, 00:15:13.942 "compare": false, 00:15:13.942 "compare_and_write": false, 00:15:13.942 "abort": false, 00:15:13.942 "nvme_admin": false, 00:15:13.942 "nvme_io": false 00:15:13.942 }, 00:15:13.942 "driver_specific": { 00:15:13.942 "lvol": { 00:15:13.942 "lvol_store_uuid": "5717eb30-7b1e-49a7-a4b0-780e97f006bd", 00:15:13.942 "base_bdev": "nvme0n1", 00:15:13.942 "thin_provision": true, 00:15:13.942 "snapshot": false, 00:15:13.942 "clone": false, 00:15:13.942 "esnap_clone": false 00:15:13.942 } 00:15:13.942 } 00:15:13.942 } 00:15:13.942 ]' 00:15:13.942 14:16:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:13.942 14:16:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:13.942 14:16:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:14.199 14:16:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:14.199 14:16:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:14.199 14:16:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:14.199 14:16:15 -- ftl/common.sh@41 -- # local base_size=5171 00:15:14.199 14:16:15 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:14.199 14:16:15 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:14.199 14:16:15 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:14.199 14:16:15 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:14.199 14:16:15 -- ftl/common.sh@48 -- # get_bdev_size 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.199 14:16:15 -- common/autotest_common.sh@1367 -- # local bdev_name=73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.199 14:16:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:14.199 14:16:15 -- common/autotest_common.sh@1369 -- # local bs 00:15:14.199 14:16:15 -- common/autotest_common.sh@1370 -- # local nb 00:15:14.199 14:16:15 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.457 14:16:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:14.457 { 00:15:14.457 "name": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:14.457 "aliases": [ 00:15:14.457 "lvs/nvme0n1p0" 00:15:14.457 ], 00:15:14.457 "product_name": "Logical Volume", 00:15:14.457 "block_size": 4096, 00:15:14.457 "num_blocks": 26476544, 
00:15:14.457 "uuid": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:14.457 "assigned_rate_limits": { 00:15:14.457 "rw_ios_per_sec": 0, 00:15:14.457 "rw_mbytes_per_sec": 0, 00:15:14.457 "r_mbytes_per_sec": 0, 00:15:14.457 "w_mbytes_per_sec": 0 00:15:14.457 }, 00:15:14.457 "claimed": false, 00:15:14.457 "zoned": false, 00:15:14.457 "supported_io_types": { 00:15:14.457 "read": true, 00:15:14.457 "write": true, 00:15:14.457 "unmap": true, 00:15:14.457 "write_zeroes": true, 00:15:14.457 "flush": false, 00:15:14.457 "reset": true, 00:15:14.457 "compare": false, 00:15:14.457 "compare_and_write": false, 00:15:14.457 "abort": false, 00:15:14.457 "nvme_admin": false, 00:15:14.457 "nvme_io": false 00:15:14.457 }, 00:15:14.457 "driver_specific": { 00:15:14.457 "lvol": { 00:15:14.457 "lvol_store_uuid": "5717eb30-7b1e-49a7-a4b0-780e97f006bd", 00:15:14.457 "base_bdev": "nvme0n1", 00:15:14.457 "thin_provision": true, 00:15:14.457 "snapshot": false, 00:15:14.457 "clone": false, 00:15:14.457 "esnap_clone": false 00:15:14.457 } 00:15:14.457 } 00:15:14.457 } 00:15:14.457 ]' 00:15:14.457 14:16:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:14.457 14:16:15 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:14.457 14:16:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:14.457 14:16:15 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:14.457 14:16:15 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:14.457 14:16:15 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:14.457 14:16:15 -- ftl/common.sh@48 -- # cache_size=5171 00:15:14.457 14:16:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:14.714 14:16:16 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:14.714 14:16:16 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:14.714 14:16:16 -- ftl/trim.sh@47 -- # get_bdev_size 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.714 14:16:16 -- common/autotest_common.sh@1367 -- # local bdev_name=73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.714 14:16:16 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:14.714 14:16:16 -- common/autotest_common.sh@1369 -- # local bs 00:15:14.714 14:16:16 -- common/autotest_common.sh@1370 -- # local nb 00:15:14.714 14:16:16 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73c9738e-8f28-42c5-a204-9ae3fae1ed83 00:15:14.973 14:16:16 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:14.973 { 00:15:14.973 "name": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:14.973 "aliases": [ 00:15:14.973 "lvs/nvme0n1p0" 00:15:14.973 ], 00:15:14.973 "product_name": "Logical Volume", 00:15:14.973 "block_size": 4096, 00:15:14.973 "num_blocks": 26476544, 00:15:14.973 "uuid": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:14.973 "assigned_rate_limits": { 00:15:14.973 "rw_ios_per_sec": 0, 00:15:14.973 "rw_mbytes_per_sec": 0, 00:15:14.973 "r_mbytes_per_sec": 0, 00:15:14.973 "w_mbytes_per_sec": 0 00:15:14.973 }, 00:15:14.973 "claimed": false, 00:15:14.973 "zoned": false, 00:15:14.973 "supported_io_types": { 00:15:14.973 "read": true, 00:15:14.973 "write": true, 00:15:14.973 "unmap": true, 00:15:14.973 "write_zeroes": true, 00:15:14.973 "flush": false, 00:15:14.973 "reset": true, 00:15:14.973 "compare": false, 00:15:14.973 "compare_and_write": false, 00:15:14.973 "abort": false, 00:15:14.973 "nvme_admin": false, 00:15:14.973 "nvme_io": false 00:15:14.973 }, 00:15:14.973 "driver_specific": { 00:15:14.973 "lvol": { 00:15:14.973 
"lvol_store_uuid": "5717eb30-7b1e-49a7-a4b0-780e97f006bd", 00:15:14.973 "base_bdev": "nvme0n1", 00:15:14.973 "thin_provision": true, 00:15:14.973 "snapshot": false, 00:15:14.973 "clone": false, 00:15:14.973 "esnap_clone": false 00:15:14.973 } 00:15:14.973 } 00:15:14.973 } 00:15:14.973 ]' 00:15:14.973 14:16:16 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:14.973 14:16:16 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:14.973 14:16:16 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:14.973 14:16:16 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:14.973 14:16:16 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:14.973 14:16:16 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:14.973 14:16:16 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:14.973 14:16:16 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 73c9738e-8f28-42c5-a204-9ae3fae1ed83 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:15.233 [2024-12-04 14:16:16.484641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.484691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:15.233 [2024-12-04 14:16:16.484707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:15.233 [2024-12-04 14:16:16.484716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.487168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.487202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:15.233 [2024-12-04 14:16:16.487212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:15:15.233 [2024-12-04 14:16:16.487219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.487316] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:15.233 [2024-12-04 14:16:16.487892] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:15.233 [2024-12-04 14:16:16.487918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.487925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:15.233 [2024-12-04 14:16:16.487934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:15:15.233 [2024-12-04 14:16:16.487941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.488103] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a7677b12-c522-4546-9c0d-e96917bc5b1d 00:15:15.233 [2024-12-04 14:16:16.489393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.489425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:15.233 [2024-12-04 14:16:16.489434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:15.233 [2024-12-04 14:16:16.489443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.496270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.496300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:15.233 
[2024-12-04 14:16:16.496308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.758 ms 00:15:15.233 [2024-12-04 14:16:16.496316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.496428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.496448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:15.233 [2024-12-04 14:16:16.496456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:15.233 [2024-12-04 14:16:16.496467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.496500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.496509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:15.233 [2024-12-04 14:16:16.496517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:15.233 [2024-12-04 14:16:16.496524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.496556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:15.233 [2024-12-04 14:16:16.499855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.499882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:15.233 [2024-12-04 14:16:16.499893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:15:15.233 [2024-12-04 14:16:16.499901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.499969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.499977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:15.233 [2024-12-04 14:16:16.499986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:15.233 [2024-12-04 14:16:16.499992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.500021] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:15.233 [2024-12-04 14:16:16.500123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:15.233 [2024-12-04 14:16:16.500141] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:15.233 [2024-12-04 14:16:16.500150] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:15.233 [2024-12-04 14:16:16.500160] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:15.233 [2024-12-04 14:16:16.500169] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:15.233 [2024-12-04 14:16:16.500178] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:15.233 [2024-12-04 14:16:16.500185] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:15.233 [2024-12-04 14:16:16.500193] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:15.233 [2024-12-04 14:16:16.500199] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:15.233 [2024-12-04 
14:16:16.500207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.500214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:15.233 [2024-12-04 14:16:16.500222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:15:15.233 [2024-12-04 14:16:16.500229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.500296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.233 [2024-12-04 14:16:16.500308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:15.233 [2024-12-04 14:16:16.500317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:15.233 [2024-12-04 14:16:16.500322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.233 [2024-12-04 14:16:16.500414] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:15.233 [2024-12-04 14:16:16.500424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:15.233 [2024-12-04 14:16:16.500433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:15.233 [2024-12-04 14:16:16.500439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:15.234 [2024-12-04 14:16:16.500452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:15.234 [2024-12-04 14:16:16.500473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:15.234 [2024-12-04 14:16:16.500484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:15.234 [2024-12-04 14:16:16.500489] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:15.234 [2024-12-04 14:16:16.500496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:15.234 [2024-12-04 14:16:16.500501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:15.234 [2024-12-04 14:16:16.500509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:15.234 [2024-12-04 14:16:16.500514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:15.234 [2024-12-04 14:16:16.500527] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:15.234 [2024-12-04 14:16:16.500534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:15.234 [2024-12-04 14:16:16.500546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:15.234 [2024-12-04 14:16:16.500552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:15.234 [2024-12-04 14:16:16.500566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:15:15.234 [2024-12-04 14:16:16.500573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:15.234 [2024-12-04 14:16:16.500584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:15.234 [2024-12-04 14:16:16.500602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:15.234 [2024-12-04 14:16:16.500621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:15.234 [2024-12-04 14:16:16.500639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:15.234 [2024-12-04 14:16:16.500650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:15.234 [2024-12-04 14:16:16.500656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:15.234 [2024-12-04 14:16:16.500661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:15.234 [2024-12-04 14:16:16.500669] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:15.234 [2024-12-04 14:16:16.500675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:15.234 [2024-12-04 14:16:16.500681] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:15.234 [2024-12-04 14:16:16.500696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:15.234 [2024-12-04 14:16:16.500702] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:15.234 [2024-12-04 14:16:16.500708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:15.234 [2024-12-04 14:16:16.500713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:15.234 [2024-12-04 14:16:16.500721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:15.234 [2024-12-04 14:16:16.500726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:15.234 [2024-12-04 14:16:16.500734] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:15.234 [2024-12-04 14:16:16.500743] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:15.234 [2024-12-04 14:16:16.500752] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:15.234 [2024-12-04 14:16:16.500759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:15.234 [2024-12-04 14:16:16.500766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:15.234 [2024-12-04 14:16:16.500772] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:15.234 [2024-12-04 14:16:16.500779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:15.234 [2024-12-04 14:16:16.500785] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:15.234 [2024-12-04 14:16:16.500792] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:15.234 [2024-12-04 14:16:16.500798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:15.234 [2024-12-04 14:16:16.500805] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:15.234 [2024-12-04 14:16:16.500810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:15.234 [2024-12-04 14:16:16.500817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:15.234 [2024-12-04 14:16:16.500823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:15.234 [2024-12-04 14:16:16.500833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:15.234 [2024-12-04 14:16:16.500840] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:15.234 [2024-12-04 14:16:16.500848] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:15.234 [2024-12-04 14:16:16.500854] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:15.234 [2024-12-04 14:16:16.500861] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:15.234 [2024-12-04 14:16:16.500867] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:15.234 [2024-12-04 14:16:16.500874] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:15.234 [2024-12-04 14:16:16.500880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.500888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:15.234 [2024-12-04 14:16:16.500893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:15:15.234 [2024-12-04 14:16:16.500901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.514874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
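
Each step of the FTL startup is logged by mngt/ftl_mngt.c as a fixed four-record group (Action, name, duration, status), while the ftl_layout.c and ftl_sb_v5.c records in between enumerate every metadata region with its offset and size. That regularity makes per-step timings easy to recover from a captured log; a rough sketch, assuming the raw console output with one record per line (the lines here have been re-wrapped) and a bash shell for the process substitutions:

    # Pair each step name with its duration; each group has exactly one of each.
    paste <(sed -n 's/.*trace_step.*name: //p'     console.log) \
          <(sed -n 's/.*trace_step.*duration: //p' console.log)

On this run the slowest startup step by far is the NV cache scrub further down, at 2434.052 ms, since a first startup must scrub the whole 4 GiB NV cache data region before use.
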
00:15:15.234 [2024-12-04 14:16:16.514910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:15.234 [2024-12-04 14:16:16.514920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.905 ms 00:15:15.234 [2024-12-04 14:16:16.514929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.515040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.515053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:15.234 [2024-12-04 14:16:16.515063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:15:15.234 [2024-12-04 14:16:16.515071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.543118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.543158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:15.234 [2024-12-04 14:16:16.543168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.001 ms 00:15:15.234 [2024-12-04 14:16:16.543176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.543233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.543243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:15.234 [2024-12-04 14:16:16.543249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:15.234 [2024-12-04 14:16:16.543260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.543654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.543679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:15.234 [2024-12-04 14:16:16.543687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:15:15.234 [2024-12-04 14:16:16.543694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.543793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.543805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:15.234 [2024-12-04 14:16:16.543811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:15.234 [2024-12-04 14:16:16.543820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.571012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.234 [2024-12-04 14:16:16.571060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:15.234 [2024-12-04 14:16:16.571076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.167 ms 00:15:15.234 [2024-12-04 14:16:16.571103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.234 [2024-12-04 14:16:16.583168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:15.234 [2024-12-04 14:16:16.598637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.235 [2024-12-04 14:16:16.598665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:15.235 [2024-12-04 14:16:16.598676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.390 ms 00:15:15.235 
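
The L2P figures are internally consistent with the layout dump above: 23592960 entries × 4 B per address = 94371840 B, i.e. exactly the 90.00 MiB reported for the l2p region, one entry per user-visible 4 KiB block (the same 23592960 that bdev_get_bdevs later reports as ftl0's num_blocks). The --l2p_dram_limit 60 option caps how much of that 90 MiB table stays resident, hence the "maximum resident size is: 59 (of 60) MiB" notice just above; the remaining ~1 MiB of the budget presumably goes to the L2P cache's own bookkeeping, with non-resident pages loaded on demand.
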
[2024-12-04 14:16:16.598683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.235 [2024-12-04 14:16:16.672785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.235 [2024-12-04 14:16:16.672820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:15.235 [2024-12-04 14:16:16.672832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.038 ms 00:15:15.235 [2024-12-04 14:16:16.672839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.235 [2024-12-04 14:16:16.672891] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:15:15.235 [2024-12-04 14:16:16.672902] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:17.765 [2024-12-04 14:16:19.106958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.107026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:17.765 [2024-12-04 14:16:19.107044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2434.052 ms 00:15:17.765 [2024-12-04 14:16:19.107053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.765 [2024-12-04 14:16:19.107291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.107308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:17.765 [2024-12-04 14:16:19.107321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:15:17.765 [2024-12-04 14:16:19.107329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.765 [2024-12-04 14:16:19.131624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.131662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:17.765 [2024-12-04 14:16:19.131676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.259 ms 00:15:17.765 [2024-12-04 14:16:19.131684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.765 [2024-12-04 14:16:19.154279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.154312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:17.765 [2024-12-04 14:16:19.154329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.544 ms 00:15:17.765 [2024-12-04 14:16:19.154336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.765 [2024-12-04 14:16:19.154667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.154685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:17.765 [2024-12-04 14:16:19.154695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:15:17.765 [2024-12-04 14:16:19.154705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:17.765 [2024-12-04 14:16:19.217826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:17.765 [2024-12-04 14:16:19.217862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:17.765 [2024-12-04 14:16:19.217875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.086 ms 00:15:17.765 [2024-12-04 14:16:19.217883] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.242910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.024 [2024-12-04 14:16:19.242947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:18.024 [2024-12-04 14:16:19.242960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.952 ms 00:15:18.024 [2024-12-04 14:16:19.242968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.247451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.024 [2024-12-04 14:16:19.247488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:18.024 [2024-12-04 14:16:19.247503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:15:18.024 [2024-12-04 14:16:19.247511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.270718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.024 [2024-12-04 14:16:19.270752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:18.024 [2024-12-04 14:16:19.270764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.147 ms 00:15:18.024 [2024-12-04 14:16:19.270771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.270838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.024 [2024-12-04 14:16:19.270847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:18.024 [2024-12-04 14:16:19.270858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:18.024 [2024-12-04 14:16:19.270866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.270960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.024 [2024-12-04 14:16:19.270985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:18.024 [2024-12-04 14:16:19.270995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:15:18.024 [2024-12-04 14:16:19.271002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.024 [2024-12-04 14:16:19.271963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:18.024 [2024-12-04 14:16:19.275172] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2787.018 ms, result 0 00:15:18.024 [2024-12-04 14:16:19.276066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:18.024 { 00:15:18.024 "name": "ftl0", 00:15:18.024 "uuid": "a7677b12-c522-4546-9c0d-e96917bc5b1d" 00:15:18.024 } 00:15:18.024 14:16:19 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:18.024 14:16:19 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:18.024 14:16:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:18.024 14:16:19 -- common/autotest_common.sh@899 -- # local i 00:15:18.024 14:16:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:18.024 14:16:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:18.024 14:16:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:18.024 14:16:19 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:18.282 [ 00:15:18.282 { 00:15:18.282 "name": "ftl0", 00:15:18.282 "aliases": [ 00:15:18.282 "a7677b12-c522-4546-9c0d-e96917bc5b1d" 00:15:18.282 ], 00:15:18.282 "product_name": "FTL disk", 00:15:18.282 "block_size": 4096, 00:15:18.282 "num_blocks": 23592960, 00:15:18.282 "uuid": "a7677b12-c522-4546-9c0d-e96917bc5b1d", 00:15:18.282 "assigned_rate_limits": { 00:15:18.282 "rw_ios_per_sec": 0, 00:15:18.282 "rw_mbytes_per_sec": 0, 00:15:18.282 "r_mbytes_per_sec": 0, 00:15:18.282 "w_mbytes_per_sec": 0 00:15:18.282 }, 00:15:18.282 "claimed": false, 00:15:18.282 "zoned": false, 00:15:18.282 "supported_io_types": { 00:15:18.282 "read": true, 00:15:18.282 "write": true, 00:15:18.282 "unmap": true, 00:15:18.282 "write_zeroes": true, 00:15:18.282 "flush": true, 00:15:18.282 "reset": false, 00:15:18.282 "compare": false, 00:15:18.282 "compare_and_write": false, 00:15:18.282 "abort": false, 00:15:18.282 "nvme_admin": false, 00:15:18.282 "nvme_io": false 00:15:18.282 }, 00:15:18.282 "driver_specific": { 00:15:18.282 "ftl": { 00:15:18.282 "base_bdev": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:18.282 "cache": "nvc0n1p0" 00:15:18.282 } 00:15:18.282 } 00:15:18.282 } 00:15:18.282 ] 00:15:18.282 14:16:19 -- common/autotest_common.sh@905 -- # return 0 00:15:18.282 14:16:19 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:18.283 14:16:19 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:18.542 14:16:19 -- ftl/trim.sh@56 -- # echo ']}' 00:15:18.542 14:16:19 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:18.801 14:16:20 -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:18.801 { 00:15:18.801 "name": "ftl0", 00:15:18.801 "aliases": [ 00:15:18.801 "a7677b12-c522-4546-9c0d-e96917bc5b1d" 00:15:18.801 ], 00:15:18.801 "product_name": "FTL disk", 00:15:18.801 "block_size": 4096, 00:15:18.801 "num_blocks": 23592960, 00:15:18.801 "uuid": "a7677b12-c522-4546-9c0d-e96917bc5b1d", 00:15:18.801 "assigned_rate_limits": { 00:15:18.801 "rw_ios_per_sec": 0, 00:15:18.801 "rw_mbytes_per_sec": 0, 00:15:18.801 "r_mbytes_per_sec": 0, 00:15:18.801 "w_mbytes_per_sec": 0 00:15:18.801 }, 00:15:18.801 "claimed": false, 00:15:18.801 "zoned": false, 00:15:18.801 "supported_io_types": { 00:15:18.801 "read": true, 00:15:18.801 "write": true, 00:15:18.801 "unmap": true, 00:15:18.801 "write_zeroes": true, 00:15:18.801 "flush": true, 00:15:18.801 "reset": false, 00:15:18.801 "compare": false, 00:15:18.801 "compare_and_write": false, 00:15:18.801 "abort": false, 00:15:18.801 "nvme_admin": false, 00:15:18.801 "nvme_io": false 00:15:18.801 }, 00:15:18.801 "driver_specific": { 00:15:18.801 "ftl": { 00:15:18.801 "base_bdev": "73c9738e-8f28-42c5-a204-9ae3fae1ed83", 00:15:18.801 "cache": "nvc0n1p0" 00:15:18.801 } 00:15:18.801 } 00:15:18.801 } 00:15:18.801 ]' 00:15:18.801 14:16:20 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:18.801 14:16:20 -- ftl/trim.sh@60 -- # nb=23592960 00:15:18.801 14:16:20 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:18.801 [2024-12-04 14:16:20.247338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.801 [2024-12-04 14:16:20.247397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:18.801 [2024-12-04 14:16:20.247409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:18.801 [2024-12-04 14:16:20.247418] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.801 [2024-12-04 14:16:20.247456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:18.801 [2024-12-04 14:16:20.249679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.801 [2024-12-04 14:16:20.249707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:18.801 [2024-12-04 14:16:20.249721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:15:18.801 [2024-12-04 14:16:20.249729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.801 [2024-12-04 14:16:20.250211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.801 [2024-12-04 14:16:20.250231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:18.801 [2024-12-04 14:16:20.250242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:15:18.801 [2024-12-04 14:16:20.250250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.801 [2024-12-04 14:16:20.252991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.801 [2024-12-04 14:16:20.253010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:18.801 [2024-12-04 14:16:20.253022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.717 ms 00:15:18.801 [2024-12-04 14:16:20.253030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:18.801 [2024-12-04 14:16:20.258295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:18.801 [2024-12-04 14:16:20.258325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:18.801 [2024-12-04 14:16:20.258334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:15:18.801 [2024-12-04 14:16:20.258340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.277667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.277698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:19.062 [2024-12-04 14:16:20.277709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.234 ms 00:15:19.062 [2024-12-04 14:16:20.277715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.290820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.290852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:19.062 [2024-12-04 14:16:20.290863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.054 ms 00:15:19.062 [2024-12-04 14:16:20.290869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.291037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.291047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:19.062 [2024-12-04 14:16:20.291060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:15:19.062 [2024-12-04 14:16:20.291066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.309506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.309533] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:19.062 [2024-12-04 14:16:20.309542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.393 ms 00:15:19.062 [2024-12-04 14:16:20.309548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.327549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.327576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:19.062 [2024-12-04 14:16:20.327585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.948 ms 00:15:19.062 [2024-12-04 14:16:20.327591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.345079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.345113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:19.062 [2024-12-04 14:16:20.345123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.433 ms 00:15:19.062 [2024-12-04 14:16:20.345128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.362574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.062 [2024-12-04 14:16:20.362600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:19.062 [2024-12-04 14:16:20.362611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.350 ms 00:15:19.062 [2024-12-04 14:16:20.362617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.062 [2024-12-04 14:16:20.362669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:19.062 [2024-12-04 14:16:20.362683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:19.062 [2024-12-04 14:16:20.362835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362958] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.362991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 
14:16:20.363131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:15:19.063 [2024-12-04 14:16:20.363294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:19.063 [2024-12-04 14:16:20.363307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:19.064 [2024-12-04 14:16:20.363392] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:19.064 [2024-12-04 14:16:20.363399] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d 00:15:19.064 [2024-12-04 14:16:20.363405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:19.064 [2024-12-04 14:16:20.363412] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:19.064 [2024-12-04 14:16:20.363417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:19.064 [2024-12-04 14:16:20.363424] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:19.064 [2024-12-04 14:16:20.363429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:19.064 [2024-12-04 14:16:20.363437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:19.064 [2024-12-04 14:16:20.363442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:19.064 [2024-12-04 14:16:20.363450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:19.064 [2024-12-04 14:16:20.363454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:19.064 [2024-12-04 14:16:20.363462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.064 [2024-12-04 14:16:20.363469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:19.064 [2024-12-04 14:16:20.363478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:15:19.064 [2024-12-04 14:16:20.363484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.373578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.064 [2024-12-04 14:16:20.373604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:19.064 [2024-12-04 14:16:20.373613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.066 ms 00:15:19.064 [2024-12-04 14:16:20.373619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.373811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.064 [2024-12-04 14:16:20.373826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:19.064 [2024-12-04 14:16:20.373834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:15:19.064 [2024-12-04 14:16:20.373840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.410664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.410694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:19.064 [2024-12-04 14:16:20.410706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.410713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.410798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.410806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:19.064 [2024-12-04 14:16:20.410814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.410821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.410881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.410889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:19.064 [2024-12-04 14:16:20.410898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.410904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.410932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.410940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:19.064 [2024-12-04 14:16:20.410948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.410954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.480127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.480168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:19.064 [2024-12-04 14:16:20.480183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.480189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:19.064 [2024-12-04 14:16:20.504079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 
[2024-12-04 14:16:20.504096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:19.064 [2024-12-04 14:16:20.504184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:19.064 [2024-12-04 14:16:20.504256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:19.064 [2024-12-04 14:16:20.504390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:19.064 [2024-12-04 14:16:20.504460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:19.064 [2024-12-04 14:16:20.504525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.064 [2024-12-04 14:16:20.504590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:19.064 [2024-12-04 14:16:20.504610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:19.064 [2024-12-04 14:16:20.504620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:19.064 [2024-12-04 14:16:20.504626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.065 [2024-12-04 14:16:20.504800] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.444 ms, result 0 00:15:19.065 true 00:15:19.065 14:16:20 -- ftl/trim.sh@63 -- # killprocess 71643 00:15:19.065 14:16:20 -- common/autotest_common.sh@936 -- # '[' -z 71643 ']' 00:15:19.065 14:16:20 -- common/autotest_common.sh@940 -- # kill -0 71643 00:15:19.323 14:16:20 -- common/autotest_common.sh@941 -- # uname 00:15:19.323 14:16:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.323 14:16:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71643 00:15:19.323 killing process with pid 71643 00:15:19.323 14:16:20 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.323 14:16:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.323 14:16:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71643' 00:15:19.323 14:16:20 -- common/autotest_common.sh@955 -- # kill 71643 00:15:19.323 14:16:20 -- common/autotest_common.sh@960 -- # wait 71643 00:15:24.620 14:16:25 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:25.562 65536+0 records in 00:15:25.562 65536+0 records out 00:15:25.562 268435456 bytes (268 MB, 256 MiB) copied, 1.06928 s, 251 MB/s 00:15:25.562 14:16:26 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:25.824 [2024-12-04 14:16:27.037745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:25.824 [2024-12-04 14:16:27.037855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:15:25.824 [2024-12-04 14:16:27.185009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.085 [2024-12-04 14:16:27.364829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.348 [2024-12-04 14:16:27.594079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:26.348 [2024-12-04 14:16:27.594154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:26.348 [2024-12-04 14:16:27.743372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.743418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:26.348 [2024-12-04 14:16:27.743430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:26.348 [2024-12-04 14:16:27.743437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.745630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.745663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:26.348 [2024-12-04 14:16:27.745671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:15:26.348 [2024-12-04 14:16:27.745677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.745737] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:26.348 [2024-12-04 14:16:27.746319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:26.348 [2024-12-04 14:16:27.746337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.746344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:26.348 [2024-12-04 14:16:27.746352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:15:26.348 [2024-12-04 14:16:27.746358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.747668] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:26.348 [2024-12-04 14:16:27.758278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.758306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:26.348 [2024-12-04 14:16:27.758315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.611 ms 00:15:26.348 [2024-12-04 14:16:27.758322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.758399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.758408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:26.348 [2024-12-04 14:16:27.758415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:26.348 [2024-12-04 14:16:27.758421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.764853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.764879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:26.348 [2024-12-04 14:16:27.764886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.399 ms 00:15:26.348 [2024-12-04 14:16:27.764896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.764979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.764987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:26.348 [2024-12-04 14:16:27.764994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:26.348 [2024-12-04 14:16:27.765000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.765021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.765029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:26.348 [2024-12-04 14:16:27.765036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:26.348 [2024-12-04 14:16:27.765042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.765068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:26.348 [2024-12-04 14:16:27.768297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.768321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:26.348 [2024-12-04 14:16:27.768329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:15:26.348 [2024-12-04 14:16:27.768337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.768380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.768387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:26.348 [2024-12-04 14:16:27.768394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:26.348 [2024-12-04 14:16:27.768399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.348 [2024-12-04 14:16:27.768415] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:26.348 [2024-12-04 14:16:27.768430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:26.348 [2024-12-04 14:16:27.768457] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:26.348 [2024-12-04 14:16:27.768472] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:26.348 [2024-12-04 14:16:27.768533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:26.348 [2024-12-04 14:16:27.768541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:26.348 [2024-12-04 14:16:27.768549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:26.348 [2024-12-04 14:16:27.768557] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:26.348 [2024-12-04 14:16:27.768564] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:26.348 [2024-12-04 14:16:27.768570] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:26.348 [2024-12-04 14:16:27.768577] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:26.348 [2024-12-04 14:16:27.768582] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:26.348 [2024-12-04 14:16:27.768591] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:26.348 [2024-12-04 14:16:27.768598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.348 [2024-12-04 14:16:27.768605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:26.348 [2024-12-04 14:16:27.768610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:15:26.348 [2024-12-04 14:16:27.768616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.349 [2024-12-04 14:16:27.768667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.349 [2024-12-04 14:16:27.768674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:26.349 [2024-12-04 14:16:27.768680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:26.349 [2024-12-04 14:16:27.768685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.349 [2024-12-04 14:16:27.768743] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:26.349 [2024-12-04 14:16:27.768750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:26.349 [2024-12-04 14:16:27.768756] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:26.349 [2024-12-04 14:16:27.768774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768790] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:26.349 [2024-12-04 14:16:27.768796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.349 [2024-12-04 14:16:27.768807] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:26.349 [2024-12-04 14:16:27.768813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:26.349 [2024-12-04 14:16:27.768818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:26.349 [2024-12-04 14:16:27.768825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:26.349 [2024-12-04 14:16:27.768835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:26.349 [2024-12-04 14:16:27.768841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768846] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:26.349 [2024-12-04 14:16:27.768852] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:26.349 [2024-12-04 14:16:27.768857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:26.349 [2024-12-04 14:16:27.768868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:26.349 [2024-12-04 14:16:27.768873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:26.349 [2024-12-04 14:16:27.768885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:26.349 [2024-12-04 14:16:27.768900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:26.349 [2024-12-04 14:16:27.768916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:26.349 [2024-12-04 14:16:27.768932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:26.349 [2024-12-04 14:16:27.768947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:26.349 [2024-12-04 14:16:27.768952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.349 [2024-12-04 14:16:27.768957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:26.349 [2024-12-04 14:16:27.768965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:26.349 [2024-12-04 14:16:27.768970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:26.349 [2024-12-04 14:16:27.768975] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:26.349 [2024-12-04 14:16:27.768980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
sb_mirror 00:15:26.349 [2024-12-04 14:16:27.768986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:26.349 [2024-12-04 14:16:27.768995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:26.349 [2024-12-04 14:16:27.769000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:26.349 [2024-12-04 14:16:27.769006] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:26.349 [2024-12-04 14:16:27.769011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:26.349 [2024-12-04 14:16:27.769016] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:26.349 [2024-12-04 14:16:27.769021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:26.349 [2024-12-04 14:16:27.769026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:26.349 [2024-12-04 14:16:27.769032] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:26.349 [2024-12-04 14:16:27.769040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.349 [2024-12-04 14:16:27.769047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:26.349 [2024-12-04 14:16:27.769052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:26.349 [2024-12-04 14:16:27.769058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:26.349 [2024-12-04 14:16:27.769064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:26.349 [2024-12-04 14:16:27.769070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:26.349 [2024-12-04 14:16:27.769076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:26.349 [2024-12-04 14:16:27.769081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:26.349 [2024-12-04 14:16:27.769099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:26.349 [2024-12-04 14:16:27.769105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:26.349 [2024-12-04 14:16:27.769111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:26.349 [2024-12-04 14:16:27.769117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:26.349 [2024-12-04 14:16:27.769122] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:26.349 [2024-12-04 14:16:27.769128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:26.349 [2024-12-04 14:16:27.769134] 
upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:26.349 [2024-12-04 14:16:27.769144] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:26.349 [2024-12-04 14:16:27.769150] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:26.349 [2024-12-04 14:16:27.769156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:26.349 [2024-12-04 14:16:27.769162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:26.349 [2024-12-04 14:16:27.769170] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:26.349 [2024-12-04 14:16:27.769177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.349 [2024-12-04 14:16:27.769183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:26.349 [2024-12-04 14:16:27.769190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:15:26.350 [2024-12-04 14:16:27.769195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.350 [2024-12-04 14:16:27.783219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.350 [2024-12-04 14:16:27.783246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:26.350 [2024-12-04 14:16:27.783256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:15:26.350 [2024-12-04 14:16:27.783264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.350 [2024-12-04 14:16:27.783359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.350 [2024-12-04 14:16:27.783368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:26.350 [2024-12-04 14:16:27.783375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:26.350 [2024-12-04 14:16:27.783382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.610 [2024-12-04 14:16:27.825569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.610 [2024-12-04 14:16:27.825602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:26.610 [2024-12-04 14:16:27.825613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.168 ms 00:15:26.610 [2024-12-04 14:16:27.825620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.610 [2024-12-04 14:16:27.825682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.610 [2024-12-04 14:16:27.825690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:26.610 [2024-12-04 14:16:27.825701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:26.610 [2024-12-04 14:16:27.825706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.610 [2024-12-04 14:16:27.826131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.610 [2024-12-04 14:16:27.826151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:26.610 [2024-12-04 14:16:27.826158] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:15:26.610 [2024-12-04 14:16:27.826164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.610 [2024-12-04 14:16:27.826269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.610 [2024-12-04 14:16:27.826277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:26.610 [2024-12-04 14:16:27.826284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:15:26.610 [2024-12-04 14:16:27.826290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.839448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.839473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:26.611 [2024-12-04 14:16:27.839481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.138 ms 00:15:26.611 [2024-12-04 14:16:27.839490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.850385] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:26.611 [2024-12-04 14:16:27.850416] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:26.611 [2024-12-04 14:16:27.850425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.850432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:26.611 [2024-12-04 14:16:27.850439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.853 ms 00:15:26.611 [2024-12-04 14:16:27.850445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.869762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.869790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:26.611 [2024-12-04 14:16:27.869803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.260 ms 00:15:26.611 [2024-12-04 14:16:27.869810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.879486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.879511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:26.611 [2024-12-04 14:16:27.879519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.615 ms 00:15:26.611 [2024-12-04 14:16:27.879531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.888854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.888878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:26.611 [2024-12-04 14:16:27.888886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.280 ms 00:15:26.611 [2024-12-04 14:16:27.888892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.889191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.889202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:26.611 [2024-12-04 14:16:27.889209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:15:26.611 
[2024-12-04 14:16:27.889216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.938347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.938389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:26.611 [2024-12-04 14:16:27.938400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.111 ms 00:15:26.611 [2024-12-04 14:16:27.938407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.946619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:26.611 [2024-12-04 14:16:27.961533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.961567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:26.611 [2024-12-04 14:16:27.961578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.043 ms 00:15:26.611 [2024-12-04 14:16:27.961585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.961658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.961666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:26.611 [2024-12-04 14:16:27.961676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:26.611 [2024-12-04 14:16:27.961683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.961744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.961752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:26.611 [2024-12-04 14:16:27.961759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:15:26.611 [2024-12-04 14:16:27.961765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.962816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.962842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:26.611 [2024-12-04 14:16:27.962849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:15:26.611 [2024-12-04 14:16:27.962856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.962886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.962896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:26.611 [2024-12-04 14:16:27.962903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:26.611 [2024-12-04 14:16:27.962910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.962941] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:26.611 [2024-12-04 14:16:27.962950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.962956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:26.611 [2024-12-04 14:16:27.962963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:26.611 [2024-12-04 14:16:27.962969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 
14:16:27.982030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.982059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:26.611 [2024-12-04 14:16:27.982068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.042 ms 00:15:26.611 [2024-12-04 14:16:27.982075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.982168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.611 [2024-12-04 14:16:27.982178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:26.611 [2024-12-04 14:16:27.982186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:26.611 [2024-12-04 14:16:27.982193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.611 [2024-12-04 14:16:27.983056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:26.611 [2024-12-04 14:16:27.985551] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.413 ms, result 0 00:15:26.611 [2024-12-04 14:16:27.986511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:26.611 [2024-12-04 14:16:27.997617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:27.553  [2024-12-04T14:16:30.400Z] Copying: 24/256 [MB] (24 MBps) [2024-12-04T14:16:31.339Z] Copying: 45/256 [MB] (21 MBps) [2024-12-04T14:16:32.281Z] Copying: 68/256 [MB] (23 MBps) [2024-12-04T14:16:33.227Z] Copying: 92/256 [MB] (23 MBps) [2024-12-04T14:16:34.170Z] Copying: 109/256 [MB] (17 MBps) [2024-12-04T14:16:35.115Z] Copying: 129/256 [MB] (20 MBps) [2024-12-04T14:16:36.060Z] Copying: 149/256 [MB] (19 MBps) [2024-12-04T14:16:37.002Z] Copying: 170/256 [MB] (21 MBps) [2024-12-04T14:16:38.389Z] Copying: 185/256 [MB] (14 MBps) [2024-12-04T14:16:39.334Z] Copying: 195/256 [MB] (10 MBps) [2024-12-04T14:16:40.280Z] Copying: 206/256 [MB] (10 MBps) [2024-12-04T14:16:41.234Z] Copying: 217/256 [MB] (11 MBps) [2024-12-04T14:16:42.181Z] Copying: 228/256 [MB] (10 MBps) [2024-12-04T14:16:43.124Z] Copying: 239/256 [MB] (11 MBps) [2024-12-04T14:16:43.124Z] Copying: 256/256 [MB] (average 17 MBps)[2024-12-04 14:16:42.860047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:41.659 [2024-12-04 14:16:42.867145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.659 [2024-12-04 14:16:42.867180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:41.659 [2024-12-04 14:16:42.867191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:41.659 [2024-12-04 14:16:42.867197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.659 [2024-12-04 14:16:42.867215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:41.659 [2024-12-04 14:16:42.869270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.659 [2024-12-04 14:16:42.869291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:41.659 [2024-12-04 14:16:42.869299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:15:41.659 [2024-12-04 14:16:42.869306] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.659 [2024-12-04 14:16:42.870859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.659 [2024-12-04 14:16:42.870886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:41.659 [2024-12-04 14:16:42.870894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:15:41.659 [2024-12-04 14:16:42.870905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.659 [2024-12-04 14:16:42.875870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.659 [2024-12-04 14:16:42.875895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:41.659 [2024-12-04 14:16:42.875902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:15:41.659 [2024-12-04 14:16:42.875908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.659 [2024-12-04 14:16:42.881281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.659 [2024-12-04 14:16:42.881306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:41.660 [2024-12-04 14:16:42.881314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.340 ms 00:15:41.660 [2024-12-04 14:16:42.881320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.898960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.898986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:41.660 [2024-12-04 14:16:42.898993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.590 ms 00:15:41.660 [2024-12-04 14:16:42.898998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.910893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.910922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:41.660 [2024-12-04 14:16:42.910931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.859 ms 00:15:41.660 [2024-12-04 14:16:42.910938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.911042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.911050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:41.660 [2024-12-04 14:16:42.911056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:41.660 [2024-12-04 14:16:42.911061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.929001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.929028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:41.660 [2024-12-04 14:16:42.929036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms 00:15:41.660 [2024-12-04 14:16:42.929042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.946764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.946790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:41.660 [2024-12-04 14:16:42.946797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.679 ms 
00:15:41.660 [2024-12-04 14:16:42.946802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.964035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.964060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:41.660 [2024-12-04 14:16:42.964067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.198 ms 00:15:41.660 [2024-12-04 14:16:42.964073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.981506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.660 [2024-12-04 14:16:42.981533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:41.660 [2024-12-04 14:16:42.981541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.370 ms 00:15:41.660 [2024-12-04 14:16:42.981547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.660 [2024-12-04 14:16:42.981582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:41.660 [2024-12-04 14:16:42.981594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:15:41.660 [2024-12-04 14:16:42.981701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:41.660 [2024-12-04 14:16:42.981959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.981972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.981978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.981984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.981990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.981996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982152] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:41.661 [2024-12-04 14:16:42.982219] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:41.661 [2024-12-04 14:16:42.982225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d 00:15:41.661 [2024-12-04 14:16:42.982232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:41.661 [2024-12-04 14:16:42.982237] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:41.661 [2024-12-04 14:16:42.982242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:41.661 [2024-12-04 14:16:42.982248] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:41.661 [2024-12-04 14:16:42.982254] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:41.661 [2024-12-04 14:16:42.982262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:41.661 [2024-12-04 14:16:42.982268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:41.661 [2024-12-04 14:16:42.982273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:41.661 [2024-12-04 14:16:42.982277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:41.661 [2024-12-04 14:16:42.982283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.661 [2024-12-04 14:16:42.982289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:41.661 [2024-12-04 14:16:42.982295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:15:41.661 [2024-12-04 14:16:42.982301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:42.991928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.661 [2024-12-04 14:16:42.991953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:41.661 [2024-12-04 14:16:42.991961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.613 ms 00:15:41.661 [2024-12-04 14:16:42.991971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:42.992150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.661 [2024-12-04 14:16:42.992165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:41.661 [2024-12-04 14:16:42.992172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:15:41.661 [2024-12-04 14:16:42.992178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.022037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.022071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:41.661 [2024-12-04 14:16:43.022083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.022111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.022189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.022197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:41.661 [2024-12-04 14:16:43.022203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.022209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.022243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.022251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:41.661 [2024-12-04 14:16:43.022257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.022265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.022280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.022286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:41.661 [2024-12-04 14:16:43.022292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.022298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.080351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.080390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:41.661 [2024-12-04 14:16:43.080402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.080409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.103140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:41.661 [2024-12-04 14:16:43.103149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.103156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.103210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:41.661 [2024-12-04 14:16:43.103216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.103221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:15:41.661 [2024-12-04 14:16:43.103254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:41.661 [2024-12-04 14:16:43.103260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.103265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.103341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:41.661 [2024-12-04 14:16:43.103347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.103353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.103385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:41.661 [2024-12-04 14:16:43.103391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.661 [2024-12-04 14:16:43.103396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.661 [2024-12-04 14:16:43.103425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.661 [2024-12-04 14:16:43.103431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:41.661 [2024-12-04 14:16:43.103437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.662 [2024-12-04 14:16:43.103443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.662 [2024-12-04 14:16:43.103479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:41.662 [2024-12-04 14:16:43.103488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:41.662 [2024-12-04 14:16:43.103494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:41.662 [2024-12-04 14:16:43.103500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.662 [2024-12-04 14:16:43.103608] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 236.455 ms, result 0 00:15:42.598 00:15:42.598 00:15:42.598 14:16:43 -- ftl/trim.sh@72 -- # svcpid=72010 00:15:42.598 14:16:43 -- ftl/trim.sh@73 -- # waitforlisten 72010 00:15:42.598 14:16:43 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:42.598 14:16:43 -- common/autotest_common.sh@829 -- # '[' -z 72010 ']' 00:15:42.598 14:16:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.598 14:16:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.598 14:16:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.598 14:16:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.598 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:15:42.598 [2024-12-04 14:16:43.995498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:42.598 [2024-12-04 14:16:43.995607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 00:15:42.856 [2024-12-04 14:16:44.141072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.856 [2024-12-04 14:16:44.277868] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.856 [2024-12-04 14:16:44.278022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.422 14:16:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.422 14:16:44 -- common/autotest_common.sh@862 -- # return 0 00:15:43.422 14:16:44 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:43.681 [2024-12-04 14:16:44.948522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:43.681 [2024-12-04 14:16:44.948568] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:43.681 [2024-12-04 14:16:45.105266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.681 [2024-12-04 14:16:45.105300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:43.681 [2024-12-04 14:16:45.105311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:43.681 [2024-12-04 14:16:45.105317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.681 [2024-12-04 14:16:45.107349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.681 [2024-12-04 14:16:45.107379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:43.681 [2024-12-04 14:16:45.107387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:15:43.681 [2024-12-04 14:16:45.107393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.681 [2024-12-04 14:16:45.107451] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:43.681 [2024-12-04 14:16:45.107998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:43.681 [2024-12-04 14:16:45.108016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.108022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:43.682 [2024-12-04 14:16:45.108030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:15:43.682 [2024-12-04 14:16:45.108036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.109330] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:43.682 [2024-12-04 14:16:45.119059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.119098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:43.682 [2024-12-04 14:16:45.119108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.733 ms 00:15:43.682 [2024-12-04 14:16:45.119116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.119180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.119190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:15:43.682 [2024-12-04 14:16:45.119197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:43.682 [2024-12-04 14:16:45.119204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.123477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.123505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:43.682 [2024-12-04 14:16:45.123512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.235 ms 00:15:43.682 [2024-12-04 14:16:45.123519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.123587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.123596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:43.682 [2024-12-04 14:16:45.123603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:43.682 [2024-12-04 14:16:45.123610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.123630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.123638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:43.682 [2024-12-04 14:16:45.123644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:43.682 [2024-12-04 14:16:45.123652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.123674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:43.682 [2024-12-04 14:16:45.126445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.126468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:43.682 [2024-12-04 14:16:45.126476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:15:43.682 [2024-12-04 14:16:45.126481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.126513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.126519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:43.682 [2024-12-04 14:16:45.126527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:43.682 [2024-12-04 14:16:45.126534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.126550] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:43.682 [2024-12-04 14:16:45.126564] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:15:43.682 [2024-12-04 14:16:45.126591] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:43.682 [2024-12-04 14:16:45.126603] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:15:43.682 [2024-12-04 14:16:45.126660] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:43.682 [2024-12-04 14:16:45.126667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:15:43.682 [2024-12-04 14:16:45.126678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:43.682 [2024-12-04 14:16:45.126686] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:43.682 [2024-12-04 14:16:45.126694] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:43.682 [2024-12-04 14:16:45.126700] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:43.682 [2024-12-04 14:16:45.126707] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:43.682 [2024-12-04 14:16:45.126712] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:43.682 [2024-12-04 14:16:45.126720] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:43.682 [2024-12-04 14:16:45.126726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.126732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:43.682 [2024-12-04 14:16:45.126738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:15:43.682 [2024-12-04 14:16:45.126745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.126795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.682 [2024-12-04 14:16:45.126803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:43.682 [2024-12-04 14:16:45.126808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:43.682 [2024-12-04 14:16:45.126814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.682 [2024-12-04 14:16:45.126871] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:43.682 [2024-12-04 14:16:45.126879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:43.682 [2024-12-04 14:16:45.126885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:43.682 [2024-12-04 14:16:45.126892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:43.682 [2024-12-04 14:16:45.126897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:43.682 [2024-12-04 14:16:45.126903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:43.682 [2024-12-04 14:16:45.126909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:43.682 [2024-12-04 14:16:45.126918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:43.682 [2024-12-04 14:16:45.126924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:43.682 [2024-12-04 14:16:45.126930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:43.682 [2024-12-04 14:16:45.126935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:43.682 [2024-12-04 14:16:45.126941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:43.682 [2024-12-04 14:16:45.126946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:43.682 [2024-12-04 14:16:45.126953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:43.682 [2024-12-04 14:16:45.126958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:15:43.682 [2024-12-04 14:16:45.126964] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:43.682 [2024-12-04 14:16:45.126969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:43.682 [2024-12-04 14:16:45.126975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:15:43.682 [2024-12-04 14:16:45.126979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:43.682 [2024-12-04 14:16:45.126986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:43.682 [2024-12-04 14:16:45.126991] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:15:43.682 [2024-12-04 14:16:45.126997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:43.682 [2024-12-04 14:16:45.127002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:43.682 [2024-12-04 14:16:45.127009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:43.682 [2024-12-04 14:16:45.127014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:43.682 [2024-12-04 14:16:45.127025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:43.682 [2024-12-04 14:16:45.127030] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:15:43.682 [2024-12-04 14:16:45.127035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:43.682 [2024-12-04 14:16:45.127040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:43.683 [2024-12-04 14:16:45.127046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:43.683 [2024-12-04 14:16:45.127050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:43.683 [2024-12-04 14:16:45.127057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:43.683 [2024-12-04 14:16:45.127062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:15:43.683 [2024-12-04 14:16:45.127068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:43.683 [2024-12-04 14:16:45.127073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:43.683 [2024-12-04 14:16:45.127079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:43.683 [2024-12-04 14:16:45.127094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:43.683 [2024-12-04 14:16:45.127100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:43.683 [2024-12-04 14:16:45.127105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:15:43.683 [2024-12-04 14:16:45.127113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:43.683 [2024-12-04 14:16:45.127118] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:43.683 [2024-12-04 14:16:45.127126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:43.683 [2024-12-04 14:16:45.127132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:43.683 [2024-12-04 14:16:45.127138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:43.683 [2024-12-04 14:16:45.127144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:43.683 [2024-12-04 14:16:45.127150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:43.683 [2024-12-04 14:16:45.127156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:15:43.683 [2024-12-04 14:16:45.127162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:43.683 [2024-12-04 14:16:45.127167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:43.683 [2024-12-04 14:16:45.127173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:43.683 [2024-12-04 14:16:45.127179] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:43.683 [2024-12-04 14:16:45.127187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:43.683 [2024-12-04 14:16:45.127193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:43.683 [2024-12-04 14:16:45.127200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:15:43.683 [2024-12-04 14:16:45.127205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:15:43.683 [2024-12-04 14:16:45.127214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:15:43.683 [2024-12-04 14:16:45.127219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:15:43.683 [2024-12-04 14:16:45.127226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:15:43.683 [2024-12-04 14:16:45.127231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:15:43.683 [2024-12-04 14:16:45.127237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:15:43.683 [2024-12-04 14:16:45.127242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:15:43.683 [2024-12-04 14:16:45.127249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:15:43.683 [2024-12-04 14:16:45.127254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:15:43.683 [2024-12-04 14:16:45.127260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:15:43.683 [2024-12-04 14:16:45.127266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:15:43.683 [2024-12-04 14:16:45.127272] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:43.683 [2024-12-04 14:16:45.127278] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:43.683 [2024-12-04 14:16:45.127285] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:43.683 [2024-12-04 14:16:45.127291] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:43.683 [2024-12-04 14:16:45.127297] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:43.683 [2024-12-04 14:16:45.127304] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:43.683 [2024-12-04 14:16:45.127312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.683 [2024-12-04 14:16:45.127317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:43.683 [2024-12-04 14:16:45.127324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:15:43.683 [2024-12-04 14:16:45.127329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.683 [2024-12-04 14:16:45.139216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.683 [2024-12-04 14:16:45.139241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:43.683 [2024-12-04 14:16:45.139252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.850 ms 00:15:43.683 [2024-12-04 14:16:45.139259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.683 [2024-12-04 14:16:45.139346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.683 [2024-12-04 14:16:45.139353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:43.683 [2024-12-04 14:16:45.139360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:43.683 [2024-12-04 14:16:45.139366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.163441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.163467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:43.943 [2024-12-04 14:16:45.163477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.059 ms 00:15:43.943 [2024-12-04 14:16:45.163484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.163528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.163537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:43.943 [2024-12-04 14:16:45.163545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:43.943 [2024-12-04 14:16:45.163551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.163830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.163841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:43.943 [2024-12-04 14:16:45.163850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:15:43.943 [2024-12-04 14:16:45.163855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.163945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.163960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:43.943 [2024-12-04 14:16:45.163970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:43.943 [2024-12-04 14:16:45.163975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.175804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.175828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:43.943 [2024-12-04 14:16:45.175838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.812 ms 00:15:43.943 [2024-12-04 14:16:45.175844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.185442] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:43.943 [2024-12-04 14:16:45.185477] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:43.943 [2024-12-04 14:16:45.185487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.185493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:43.943 [2024-12-04 14:16:45.185501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.566 ms 00:15:43.943 [2024-12-04 14:16:45.185506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.203994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.204022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:43.943 [2024-12-04 14:16:45.204033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.443 ms 00:15:43.943 [2024-12-04 14:16:45.204039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.213216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.213245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:43.943 [2024-12-04 14:16:45.213254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.109 ms 00:15:43.943 [2024-12-04 14:16:45.213259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.222009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.222033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:43.943 [2024-12-04 14:16:45.222043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.709 ms 00:15:43.943 [2024-12-04 14:16:45.222049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.222336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.222351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:43.943 [2024-12-04 14:16:45.222361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:15:43.943 [2024-12-04 14:16:45.222367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.267213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.267245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:43.943 [2024-12-04 14:16:45.267257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.828 ms 00:15:43.943 [2024-12-04 14:16:45.267263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.275167] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:43.943 [2024-12-04 14:16:45.286389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.286422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:43.943 [2024-12-04 14:16:45.286431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:15:43.943 [2024-12-04 14:16:45.286438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.286488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.286499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:43.943 [2024-12-04 14:16:45.286505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:43.943 [2024-12-04 14:16:45.286514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.286550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.286558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:43.943 [2024-12-04 14:16:45.286564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:43.943 [2024-12-04 14:16:45.286571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.287493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.287516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:15:43.943 [2024-12-04 14:16:45.287523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:15:43.943 [2024-12-04 14:16:45.287531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.287553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.287563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:43.943 [2024-12-04 14:16:45.287568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:43.943 [2024-12-04 14:16:45.287575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.287602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:43.943 [2024-12-04 14:16:45.287613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.287619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:43.943 [2024-12-04 14:16:45.287626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:43.943 [2024-12-04 14:16:45.287631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.305898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.305924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:43.943 [2024-12-04 14:16:45.305935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.248 ms 00:15:43.943 [2024-12-04 14:16:45.305941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.306009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.943 [2024-12-04 14:16:45.306017] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:43.943 [2024-12-04 14:16:45.306025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:43.943 [2024-12-04 14:16:45.306032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.943 [2024-12-04 14:16:45.306964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:43.943 [2024-12-04 14:16:45.309419] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.477 ms, result 0 00:15:43.943 [2024-12-04 14:16:45.311025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:43.943 Some configs were skipped because the RPC state that can call them passed over. 00:15:43.943 14:16:45 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:15:44.203 [2024-12-04 14:16:45.556667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.203 [2024-12-04 14:16:45.556714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:44.203 [2024-12-04 14:16:45.556726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.982 ms 00:15:44.203 [2024-12-04 14:16:45.556736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.203 [2024-12-04 14:16:45.556773] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 25.088 ms, result 0 00:15:44.203 true 00:15:44.203 14:16:45 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:15:44.463 [2024-12-04 14:16:45.769307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.463 [2024-12-04 14:16:45.769344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:15:44.463 [2024-12-04 14:16:45.769357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.289 ms 00:15:44.463 [2024-12-04 14:16:45.769364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.463 [2024-12-04 14:16:45.769400] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 24.383 ms, result 0 00:15:44.463 true 00:15:44.463 14:16:45 -- ftl/trim.sh@81 -- # killprocess 72010 00:15:44.463 14:16:45 -- common/autotest_common.sh@936 -- # '[' -z 72010 ']' 00:15:44.463 14:16:45 -- common/autotest_common.sh@940 -- # kill -0 72010 00:15:44.463 14:16:45 -- common/autotest_common.sh@941 -- # uname 00:15:44.463 14:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.463 14:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72010 00:15:44.463 14:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.463 killing process with pid 72010 00:15:44.463 14:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.463 14:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72010' 00:15:44.463 14:16:45 -- common/autotest_common.sh@955 -- # kill 72010 00:15:44.463 14:16:45 -- common/autotest_common.sh@960 -- # wait 72010 00:15:45.405 [2024-12-04 14:16:46.502446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.502503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:15:45.405 [2024-12-04 14:16:46.502517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:45.405 [2024-12-04 14:16:46.502526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.502551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:45.405 [2024-12-04 14:16:46.505032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.505059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:45.405 [2024-12-04 14:16:46.505073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:15:45.405 [2024-12-04 14:16:46.505080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.505390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.505400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:45.405 [2024-12-04 14:16:46.505410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:15:45.405 [2024-12-04 14:16:46.505417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.509989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.510019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:45.405 [2024-12-04 14:16:46.510032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:15:45.405 [2024-12-04 14:16:46.510040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.517052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.517094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:45.405 [2024-12-04 14:16:46.517105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:15:45.405 [2024-12-04 14:16:46.517112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.527241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.527271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:45.405 [2024-12-04 14:16:46.527285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.074 ms 00:15:45.405 [2024-12-04 14:16:46.527291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.534900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.534940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:45.405 [2024-12-04 14:16:46.534951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.571 ms 00:15:45.405 [2024-12-04 14:16:46.534958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.535104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.535114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:45.405 [2024-12-04 14:16:46.535125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:15:45.405 [2024-12-04 14:16:46.535132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 
[2024-12-04 14:16:46.545603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.545632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:45.405 [2024-12-04 14:16:46.545643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.450 ms 00:15:45.405 [2024-12-04 14:16:46.545650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.555978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.556007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:45.405 [2024-12-04 14:16:46.556023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.292 ms 00:15:45.405 [2024-12-04 14:16:46.556029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.565600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.565627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:45.405 [2024-12-04 14:16:46.565638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.535 ms 00:15:45.405 [2024-12-04 14:16:46.565644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.576002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:45.405 [2024-12-04 14:16:46.576031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:45.405 [2024-12-04 14:16:46.576042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.293 ms 00:15:45.405 [2024-12-04 14:16:46.576049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:45.405 [2024-12-04 14:16:46.576083] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:45.405 [2024-12-04 14:16:46.576106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576204] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:45.405 [2024-12-04 14:16:46.576331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 
14:16:46.576403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:45.406 [2024-12-04 14:16:46.576594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:15:45.406 [2024-12-04 14:16:46.576603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:15:45.406 [2024-12-04 14:16:46.576927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:15:45.406 [2024-12-04 14:16:46.576937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d
00:15:45.406 [2024-12-04 14:16:46.576945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:15:45.406 [2024-12-04 14:16:46.576953] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:15:45.406 [2024-12-04 14:16:46.576960] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:15:45.406 [2024-12-04 14:16:46.576969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:15:45.406 [2024-12-04 14:16:46.576976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:45.406 [2024-12-04 14:16:46.576985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:15:45.406 [2024-12-04 14:16:46.576992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:15:45.406 [2024-12-04 14:16:46.577000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:15:45.406 [2024-12-04 14:16:46.577006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:15:45.406 [2024-12-04 14:16:46.577015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:45.406 [2024-12-04 14:16:46.577022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:15:45.406 [2024-12-04 14:16:46.577032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms
00:15:45.406 [2024-12-04 14:16:46.577040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.589203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:45.406 [2024-12-04 14:16:46.589232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:15:45.406 [2024-12-04 14:16:46.589245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.133 ms
00:15:45.406 [2024-12-04 14:16:46.589252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.589459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:45.406 [2024-12-04 14:16:46.589468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:15:45.406 [2024-12-04 14:16:46.589480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms
00:15:45.406 [2024-12-04 14:16:46.589486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.633729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.633762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:45.406 [2024-12-04 14:16:46.633773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.633780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.633853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.633862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:45.406 [2024-12-04 14:16:46.633873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.633879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.633919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.633928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:45.406 [2024-12-04 14:16:46.633939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.633946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.633965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.633973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:45.406 [2024-12-04 14:16:46.633981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.633990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.711154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.711188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:45.406 [2024-12-04 14:16:46.711201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.711209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.741038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.741071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:45.406 [2024-12-04 14:16:46.741084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.741100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.741147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.741156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:45.406 [2024-12-04 14:16:46.741167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.741174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.741204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.741211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:45.406 [2024-12-04 14:16:46.741221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.406 [2024-12-04 14:16:46.741228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.406 [2024-12-04 14:16:46.741315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.406 [2024-12-04 14:16:46.741324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:45.407 [2024-12-04 14:16:46.741333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.407 [2024-12-04 14:16:46.741340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.407 [2024-12-04 14:16:46.741371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.407 [2024-12-04 14:16:46.741380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:15:45.407 [2024-12-04 14:16:46.741389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.407 [2024-12-04 14:16:46.741396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.407 [2024-12-04 14:16:46.741433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.407 [2024-12-04 14:16:46.741441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:45.407 [2024-12-04 14:16:46.741452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.407 [2024-12-04 14:16:46.741459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.407 [2024-12-04 14:16:46.741503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:45.407 [2024-12-04 14:16:46.741512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:45.407 [2024-12-04 14:16:46.741521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:45.407 [2024-12-04 14:16:46.741528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:45.407 [2024-12-04 14:16:46.741653] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 239.189 ms, result 0
00:15:46.393 14:16:47 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:15:46.393 14:16:47 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-04 14:16:47.551174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:46.393 [2024-12-04 14:16:47.551282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72062 ]
00:15:46.393 [2024-12-04 14:16:47.700121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:46.393 [2024-12-04 14:16:47.840345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:46.651 [2024-12-04 14:16:48.045642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:15:46.652 [2024-12-04 14:16:48.045690] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:15:46.911 [2024-12-04 14:16:48.189150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.911 [2024-12-04 14:16:48.189186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:15:46.911 [2024-12-04 14:16:48.189197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:15:46.911 [2024-12-04 14:16:48.189202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.911 [2024-12-04 14:16:48.191262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.911 [2024-12-04 14:16:48.191291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:46.911 [2024-12-04 14:16:48.191298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms
00:15:46.911 [2024-12-04 14:16:48.191304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.911 [2024-12-04 14:16:48.191362] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:15:46.911 [2024-12-04 14:16:48.191915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:15:46.911 [2024-12-04 14:16:48.191925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.911 [2024-12-04 14:16:48.191931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:46.911 [2024-12-04 14:16:48.191937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms
00:15:46.911 [2024-12-04 14:16:48.191943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.911 [2024-12-04 14:16:48.193014] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:15:46.911 [2024-12-04 14:16:48.202668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.911 [2024-12-04 14:16:48.202693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:15:46.911 [2024-12-04 14:16:48.202701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.656 ms
00:15:46.911 [2024-12-04 14:16:48.202707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.202767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.202775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:15:46.912 [2024-12-04 14:16:48.202781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:15:46.912 [2024-12-04 14:16:48.202787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.207160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.207182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:46.912 [2024-12-04 14:16:48.207188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.344 ms
00:15:46.912 [2024-12-04 14:16:48.207197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.207284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.207292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:46.912 [2024-12-04 14:16:48.207302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:15:46.912 [2024-12-04 14:16:48.207307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.207326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.207332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:15:46.912 [2024-12-04 14:16:48.207338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:15:46.912 [2024-12-04 14:16:48.207343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.207364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:15:46.912 [2024-12-04 14:16:48.210112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.210131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:46.912 [2024-12-04 14:16:48.210138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms
00:15:46.912 [2024-12-04 14:16:48.210146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.210176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.210182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:15:46.912 [2024-12-04 14:16:48.210188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:15:46.912 [2024-12-04 14:16:48.210193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.210206] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:15:46.912 [2024-12-04 14:16:48.210220] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:15:46.912 [2024-12-04 14:16:48.210244] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:15:46.912 [2024-12-04 14:16:48.210257] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:15:46.912 [2024-12-04 14:16:48.210313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:15:46.912 [2024-12-04 14:16:48.210321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:15:46.912 [2024-12-04 14:16:48.210328] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:15:46.912 [2024-12-04 14:16:48.210335] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210342] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210348] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:15:46.912 [2024-12-04 14:16:48.210354] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:15:46.912 [2024-12-04 14:16:48.210359] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:15:46.912 [2024-12-04 14:16:48.210366] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:15:46.912 [2024-12-04 14:16:48.210372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.210377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:15:46.912 [2024-12-04 14:16:48.210383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms
00:15:46.912 [2024-12-04 14:16:48.210388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.210437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.912 [2024-12-04 14:16:48.210443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:15:46.912 [2024-12-04 14:16:48.210449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:15:46.912 [2024-12-04 14:16:48.210454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.912 [2024-12-04 14:16:48.210509] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:15:46.912 [2024-12-04 14:16:48.210516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:15:46.912 [2024-12-04 14:16:48.210522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:15:46.912 [2024-12-04 14:16:48.210538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:15:46.912 [2024-12-04 14:16:48.210554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:15:46.912 [2024-12-04 14:16:48.210564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:15:46.912 [2024-12-04 14:16:48.210569] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:15:46.912 [2024-12-04 14:16:48.210574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:15:46.912 [2024-12-04 14:16:48.210579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:15:46.912 [2024-12-04 14:16:48.210589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB
00:15:46.912 [2024-12-04 14:16:48.210594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:15:46.912 [2024-12-04 14:16:48.210604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB
00:15:46.912 [2024-12-04 14:16:48.210609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:15:46.912 [2024-12-04 14:16:48.210619] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB
00:15:46.912 [2024-12-04 14:16:48.210624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:15:46.912 [2024-12-04 14:16:48.210634] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:15:46.912 [2024-12-04 14:16:48.210648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:15:46.912 [2024-12-04 14:16:48.210662] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:15:46.912 [2024-12-04 14:16:48.210677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:15:46.912 [2024-12-04 14:16:48.210692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:15:46.912 [2024-12-04 14:16:48.210701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:15:46.912 [2024-12-04 14:16:48.210706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB
00:15:46.912 [2024-12-04 14:16:48.210710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:15:46.912 [2024-12-04 14:16:48.210714] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:15:46.912 [2024-12-04 14:16:48.210720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:15:46.912 [2024-12-04 14:16:48.210725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:15:46.912 [2024-12-04 14:16:48.210738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:15:46.912 [2024-12-04 14:16:48.210743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:15:46.912 [2024-12-04 14:16:48.210748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:15:46.912 [2024-12-04 14:16:48.210754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:15:46.912 [2024-12-04 14:16:48.210759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:15:46.912 [2024-12-04 14:16:48.210764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:15:46.912 [2024-12-04 14:16:48.210769] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:15:46.912 [2024-12-04 14:16:48.210776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:15:46.912 [2024-12-04 14:16:48.210782] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:15:46.912 [2024-12-04 14:16:48.210788] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
00:15:46.912 [2024-12-04 14:16:48.210793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
00:15:46.912 [2024-12-04 14:16:48.210799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
00:15:46.913 [2024-12-04 14:16:48.210804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
00:15:46.913 [2024-12-04 14:16:48.210809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
00:15:46.913 [2024-12-04 14:16:48.210814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
00:15:46.913 [2024-12-04 14:16:48.210819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
00:15:46.913 [2024-12-04 14:16:48.210825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
00:15:46.913 [2024-12-04 14:16:48.210830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
00:15:46.913 [2024-12-04 14:16:48.210835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
00:15:46.913 [2024-12-04 14:16:48.210840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
00:15:46.913 [2024-12-04 14:16:48.210846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
00:15:46.913 [2024-12-04 14:16:48.210851] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:15:46.913 [2024-12-04 14:16:48.210860] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:15:46.913 [2024-12-04 14:16:48.210866] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:15:46.913 [2024-12-04 14:16:48.210871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:15:46.913 [2024-12-04 14:16:48.210877] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:15:46.913 [2024-12-04 14:16:48.210882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:15:46.913 [2024-12-04 14:16:48.210888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.210894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:15:46.913 [2024-12-04 14:16:48.210899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms
00:15:46.913 [2024-12-04 14:16:48.210904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.222735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.222761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:46.913 [2024-12-04 14:16:48.222769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.800 ms
00:15:46.913 [2024-12-04 14:16:48.222776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.222862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.222869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:15:46.913 [2024-12-04 14:16:48.222876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:15:46.913 [2024-12-04 14:16:48.222881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.266472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.266500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:46.913 [2024-12-04 14:16:48.266510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.575 ms
00:15:46.913 [2024-12-04 14:16:48.266518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.266575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.266584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:46.913 [2024-12-04 14:16:48.266593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:15:46.913 [2024-12-04 14:16:48.266599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.266872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.266884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:46.913 [2024-12-04 14:16:48.266890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms
00:15:46.913 [2024-12-04 14:16:48.266896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.266989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.266995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:46.913 [2024-12-04 14:16:48.267001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:15:46.913 [2024-12-04 14:16:48.267006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.278347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.278368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:46.913 [2024-12-04 14:16:48.278376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.323 ms
00:15:46.913 [2024-12-04 14:16:48.278383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.288065] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:15:46.913 [2024-12-04 14:16:48.288099] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:15:46.913 [2024-12-04 14:16:48.288107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.288113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:15:46.913 [2024-12-04 14:16:48.288120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.651 ms
00:15:46.913 [2024-12-04 14:16:48.288125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.306433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.306470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:15:46.913 [2024-12-04 14:16:48.306478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.261 ms
00:15:46.913 [2024-12-04 14:16:48.306485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.315401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.315423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:15:46.913 [2024-12-04 14:16:48.315436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.865 ms
00:15:46.913 [2024-12-04 14:16:48.315441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.324065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.324092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:15:46.913 [2024-12-04 14:16:48.324099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.584 ms
00:15:46.913 [2024-12-04 14:16:48.324105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.324373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.324386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:15:46.913 [2024-12-04 14:16:48.324392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:15:46.913 [2024-12-04 14:16:48.324400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:46.913 [2024-12-04 14:16:48.369612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:46.913 [2024-12-04 14:16:48.369640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:15:46.913 [2024-12-04 14:16:48.369650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.194 ms
00:15:46.913 [2024-12-04 14:16:48.369659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.377652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:15:47.172 [2024-12-04 14:16:48.389320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.389346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:15:47.172 [2024-12-04 14:16:48.389355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.591 ms
00:15:47.172 [2024-12-04 14:16:48.389362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.389415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.389423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:15:47.172 [2024-12-04 14:16:48.389432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:15:47.172 [2024-12-04 14:16:48.389438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.389475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.389482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:15:47.172 [2024-12-04 14:16:48.389487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:15:47.172 [2024-12-04 14:16:48.389492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.390431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.390452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:15:47.172 [2024-12-04 14:16:48.390459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms
00:15:47.172 [2024-12-04 14:16:48.390465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.390489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.390498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:15:47.172 [2024-12-04 14:16:48.390505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:15:47.172 [2024-12-04 14:16:48.390511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.390536] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:15:47.172 [2024-12-04 14:16:48.390543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.390549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:15:47.172 [2024-12-04 14:16:48.390555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:15:47.172 [2024-12-04 14:16:48.390560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.408834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.408941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:15:47.172 [2024-12-04 14:16:48.408955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms
00:15:47.172 [2024-12-04 14:16:48.408961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.409027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:47.172 [2024-12-04 14:16:48.409035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:15:47.172 [2024-12-04 14:16:48.409042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:15:47.172 [2024-12-04 14:16:48.409047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:47.172 [2024-12-04 14:16:48.409725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:15:47.172 [2024-12-04 14:16:48.412242] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 220.344 ms, result 0
00:15:47.172 [2024-12-04 14:16:48.412913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:47.172 [2024-12-04 14:16:48.428173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:15:48.116  [2024-12-04T14:16:50.525Z] Copying: 35/256 [MB] (35 MBps) [2024-12-04T14:16:51.466Z] Copying: 59/256 [MB] (23 MBps) [2024-12-04T14:16:52.850Z] Copying: 84/256 [MB] (24 MBps) [2024-12-04T14:16:53.794Z] Copying: 106/256 [MB] (22 MBps) [2024-12-04T14:16:54.739Z] Copying: 128/256 [MB] (21 MBps) [2024-12-04T14:16:55.680Z] Copying: 150/256 [MB] (22 MBps) [2024-12-04T14:16:56.626Z] Copying: 175/256 [MB] (24 MBps) [2024-12-04T14:16:57.566Z] Copying: 195/256 [MB] (20 MBps) [2024-12-04T14:16:58.504Z] Copying: 220/256 [MB] (25 MBps) [2024-12-04T14:16:59.076Z] Copying: 246/256 [MB] (25 MBps) [2024-12-04T14:16:59.076Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-04 14:16:58.844840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:57.611 [2024-12-04 14:16:58.854055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.611 [2024-12-04 14:16:58.854112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:15:57.612 [2024-12-04 14:16:58.854125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:15:57.612 [2024-12-04 14:16:58.854133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.854154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:15:57.612 [2024-12-04 14:16:58.856736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.856850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:15:57.612 [2024-12-04 14:16:58.856865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms
00:15:57.612 [2024-12-04 14:16:58.856873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.857158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.857169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:15:57.612 [2024-12-04 14:16:58.857177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms
00:15:57.612 [2024-12-04 14:16:58.857187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.860865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.860884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:15:57.612 [2024-12-04 14:16:58.860893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms
00:15:57.612 [2024-12-04 14:16:58.860901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.867789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.867891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:15:57.612 [2024-12-04 14:16:58.867906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms
00:15:57.612 [2024-12-04 14:16:58.867914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.891764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.891873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:57.612 [2024-12-04 14:16:58.891926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.788 ms
00:15:57.612 [2024-12-04 14:16:58.891948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.905932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.906037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:15:57.612 [2024-12-04 14:16:58.906105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.934 ms
00:15:57.612 [2024-12-04 14:16:58.906129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.906384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.906472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:15:57.612 [2024-12-04 14:16:58.906496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:15:57.612 [2024-12-04 14:16:58.906514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.930311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.930410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:15:57.612 [2024-12-04 14:16:58.930457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.769 ms
00:15:57.612 [2024-12-04 14:16:58.930477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.953504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.953599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:15:57.612 [2024-12-04 14:16:58.953644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.965 ms
00:15:57.612 [2024-12-04 14:16:58.953665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:58.977129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:58.977239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:15:57.612 [2024-12-04 14:16:58.977288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.415 ms
00:15:57.612 [2024-12-04 14:16:58.977309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:59.000595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.612 [2024-12-04 14:16:59.000696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:15:57.612 [2024-12-04 14:16:59.000742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.209 ms
00:15:57.612 [2024-12-04 14:16:59.000763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.612 [2024-12-04 14:16:59.000827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:15:57.612 [2024-12-04 14:16:59.000857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.000888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.000916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.000943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.001855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.002821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:15:57.612 [2024-12-04 14:16:59.003750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.003994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:15:57.613 [2024-12-04 14:16:59.004154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:15:57.613 [2024-12-04 14:16:59.004162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d
00:15:57.613 [2024-12-04 14:16:59.004170] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:15:57.613 [2024-12-04 14:16:59.004176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:15:57.613 [2024-12-04 14:16:59.004183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:15:57.613 [2024-12-04 14:16:59.004190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:15:57.613 [2024-12-04 14:16:59.004197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:57.613 [2024-12-04 14:16:59.004207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:15:57.613 [2024-12-04 14:16:59.004214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:15:57.613 [2024-12-04 14:16:59.004220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:15:57.613 [2024-12-04 14:16:59.004226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:15:57.613 [2024-12-04 14:16:59.004234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.613 [2024-12-04 14:16:59.004242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:15:57.613 [2024-12-04 14:16:59.004251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms
00:15:57.613 [2024-12-04 14:16:59.004257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.016573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.613 [2024-12-04 14:16:59.016603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:15:57.613 [2024-12-04 14:16:59.016617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms
00:15:57.613 [2024-12-04 14:16:59.016624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.016832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:57.613 [2024-12-04 14:16:59.016846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:15:57.613 [2024-12-04 14:16:59.016854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms
00:15:57.613 [2024-12-04 14:16:59.016861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.053765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.613 [2024-12-04 14:16:59.053884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:57.613 [2024-12-04 14:16:59.053903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.613 [2024-12-04 14:16:59.053910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.053984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.613 [2024-12-04 14:16:59.053992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:57.613 [2024-12-04 14:16:59.054000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.613 [2024-12-04 14:16:59.054007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.054045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.613 [2024-12-04 14:16:59.054053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:57.613 [2024-12-04 14:16:59.054060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.613 [2024-12-04 14:16:59.054070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.613 [2024-12-04 14:16:59.054194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.613 [2024-12-04 14:16:59.054210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:57.613 [2024-12-04 14:16:59.054218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.613 [2024-12-04 14:16:59.054225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.127703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.127835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:57.874 [2024-12-04 14:16:59.127855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874 [2024-12-04 14:16:59.127862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.156358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.156388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:57.874 [2024-12-04 14:16:59.156398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874 [2024-12-04 14:16:59.156406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.156454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.156463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:57.874 [2024-12-04 14:16:59.156470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874 [2024-12-04 14:16:59.156477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.156509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.156517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:57.874 [2024-12-04 14:16:59.156525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874 [2024-12-04 14:16:59.156532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.156613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.156622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:57.874 [2024-12-04 14:16:59.156630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874 [2024-12-04 14:16:59.156636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:57.874 [2024-12-04 14:16:59.156672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:57.874 [2024-12-04 14:16:59.156681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:15:57.874 [2024-12-04 14:16:59.156688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:57.874
[2024-12-04 14:16:59.156696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.874 [2024-12-04 14:16:59.156732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.874 [2024-12-04 14:16:59.156740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:57.874 [2024-12-04 14:16:59.156748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.874 [2024-12-04 14:16:59.156755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.874 [2024-12-04 14:16:59.156800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:57.874 [2024-12-04 14:16:59.156811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:57.874 [2024-12-04 14:16:59.156818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:57.874 [2024-12-04 14:16:59.156825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:57.874 [2024-12-04 14:16:59.156956] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.899 ms, result 0 00:15:58.815 00:15:58.815 00:15:58.815 14:16:59 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:15:58.815 14:16:59 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:59.076 14:17:00 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:59.337 [2024-12-04 14:17:00.572842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:59.337 [2024-12-04 14:17:00.573059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72203 ] 00:15:59.337 [2024-12-04 14:17:00.722274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.597 [2024-12-04 14:17:00.896618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.858 [2024-12-04 14:17:01.147139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:59.858 [2024-12-04 14:17:01.147347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:59.858 [2024-12-04 14:17:01.298050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.298221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:59.858 [2024-12-04 14:17:01.298285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:59.858 [2024-12-04 14:17:01.298308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.301003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.301126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:59.858 [2024-12-04 14:17:01.301180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:15:59.858 [2024-12-04 14:17:01.301202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.301552] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer 
cache 00:15:59.858 [2024-12-04 14:17:01.302420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:59.858 [2024-12-04 14:17:01.302540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.302594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:59.858 [2024-12-04 14:17:01.302618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:15:59.858 [2024-12-04 14:17:01.302636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.303730] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:59.858 [2024-12-04 14:17:01.316364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.316471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:59.858 [2024-12-04 14:17:01.316524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.635 ms 00:15:59.858 [2024-12-04 14:17:01.316535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.316668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.316685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:59.858 [2024-12-04 14:17:01.316694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:59.858 [2024-12-04 14:17:01.316701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.321562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.321590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:59.858 [2024-12-04 14:17:01.321600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:15:59.858 [2024-12-04 14:17:01.321611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.321710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.321721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:59.858 [2024-12-04 14:17:01.321730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:59.858 [2024-12-04 14:17:01.321738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.321764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.858 [2024-12-04 14:17:01.321773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:59.858 [2024-12-04 14:17:01.321782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:59.858 [2024-12-04 14:17:01.321789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.858 [2024-12-04 14:17:01.321819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:00.120 [2024-12-04 14:17:01.325215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.120 [2024-12-04 14:17:01.325240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:00.120 [2024-12-04 14:17:01.325249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:16:00.120 [2024-12-04 14:17:01.325258] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.120 [2024-12-04 14:17:01.325293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.120 [2024-12-04 14:17:01.325301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:00.120 [2024-12-04 14:17:01.325309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:00.120 [2024-12-04 14:17:01.325315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.120 [2024-12-04 14:17:01.325332] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:00.120 [2024-12-04 14:17:01.325349] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:00.120 [2024-12-04 14:17:01.325381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:00.120 [2024-12-04 14:17:01.325398] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:00.120 [2024-12-04 14:17:01.325469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:00.120 [2024-12-04 14:17:01.325479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:00.120 [2024-12-04 14:17:01.325489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:00.120 [2024-12-04 14:17:01.325498] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:00.120 [2024-12-04 14:17:01.325506] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:00.120 [2024-12-04 14:17:01.325513] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:00.120 [2024-12-04 14:17:01.325520] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:00.120 [2024-12-04 14:17:01.325527] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:00.120 [2024-12-04 14:17:01.325536] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:00.120 [2024-12-04 14:17:01.325543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.120 [2024-12-04 14:17:01.325550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:00.120 [2024-12-04 14:17:01.325558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:00.120 [2024-12-04 14:17:01.325564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.120 [2024-12-04 14:17:01.325628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.120 [2024-12-04 14:17:01.325636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:00.120 [2024-12-04 14:17:01.325643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:00.120 [2024-12-04 14:17:01.325650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.121 [2024-12-04 14:17:01.325736] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:00.121 [2024-12-04 14:17:01.325746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:00.121 [2024-12-04 14:17:01.325753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:00.121 
[2024-12-04 14:17:01.325761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:00.121 [2024-12-04 14:17:01.325774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:00.121 [2024-12-04 14:17:01.325794] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:00.121 [2024-12-04 14:17:01.325807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:00.121 [2024-12-04 14:17:01.325813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:00.121 [2024-12-04 14:17:01.325820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:00.121 [2024-12-04 14:17:01.325827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:00.121 [2024-12-04 14:17:01.325839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:00.121 [2024-12-04 14:17:01.325845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:00.121 [2024-12-04 14:17:01.325858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:00.121 [2024-12-04 14:17:01.325864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325871] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:00.121 [2024-12-04 14:17:01.325877] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:00.121 [2024-12-04 14:17:01.325883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325889] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:00.121 [2024-12-04 14:17:01.325896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:00.121 [2024-12-04 14:17:01.325914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:00.121 [2024-12-04 14:17:01.325932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:00.121 [2024-12-04 14:17:01.325950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:00.121 [2024-12-04 14:17:01.325962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:16:00.121 [2024-12-04 14:17:01.325969] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:00.121 [2024-12-04 14:17:01.325975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:00.121 [2024-12-04 14:17:01.325981] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:00.121 [2024-12-04 14:17:01.325987] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:00.121 [2024-12-04 14:17:01.325993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:00.121 [2024-12-04 14:17:01.325999] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:00.121 [2024-12-04 14:17:01.326006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:00.121 [2024-12-04 14:17:01.326012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:00.121 [2024-12-04 14:17:01.326022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:00.121 [2024-12-04 14:17:01.326030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:00.121 [2024-12-04 14:17:01.326037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:00.121 [2024-12-04 14:17:01.326044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:00.121 [2024-12-04 14:17:01.326050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:00.121 [2024-12-04 14:17:01.326057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:00.121 [2024-12-04 14:17:01.326063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:00.121 [2024-12-04 14:17:01.326071] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:00.121 [2024-12-04 14:17:01.326080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:00.121 [2024-12-04 14:17:01.326121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:00.121 [2024-12-04 14:17:01.326129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:00.121 [2024-12-04 14:17:01.326136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:00.121 [2024-12-04 14:17:01.326143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:00.121 [2024-12-04 14:17:01.326151] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:00.121 [2024-12-04 14:17:01.326158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:00.121 [2024-12-04 14:17:01.326165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:00.121 [2024-12-04 14:17:01.326172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:00.121 [2024-12-04 14:17:01.326179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf 
ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:00.121 [2024-12-04 14:17:01.326187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:00.121 [2024-12-04 14:17:01.326194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:00.121 [2024-12-04 14:17:01.326201] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:00.121 [2024-12-04 14:17:01.326208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:00.121 [2024-12-04 14:17:01.326215] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:00.121 [2024-12-04 14:17:01.326228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:00.121 [2024-12-04 14:17:01.326236] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:00.121 [2024-12-04 14:17:01.326243] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:00.121 [2024-12-04 14:17:01.326250] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:00.121 [2024-12-04 14:17:01.326257] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:00.121 [2024-12-04 14:17:01.326264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.121 [2024-12-04 14:17:01.326272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:00.122 [2024-12-04 14:17:01.326278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:16:00.122 [2024-12-04 14:17:01.326285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.340984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.341103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:00.122 [2024-12-04 14:17:01.341154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.658 ms 00:16:00.122 [2024-12-04 14:17:01.341177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.341302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.341326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:00.122 [2024-12-04 14:17:01.341346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:00.122 [2024-12-04 14:17:01.341401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.380633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.380756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:00.122 [2024-12-04 14:17:01.380811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.196 ms 00:16:00.122 [2024-12-04 14:17:01.380834] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.380910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.380936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:00.122 [2024-12-04 14:17:01.380961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:00.122 [2024-12-04 14:17:01.380978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.381313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.381349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:00.122 [2024-12-04 14:17:01.381369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:16:00.122 [2024-12-04 14:17:01.381387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.381513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.381580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:00.122 [2024-12-04 14:17:01.381599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:00.122 [2024-12-04 14:17:01.381617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.395604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.395702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:00.122 [2024-12-04 14:17:01.395748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.912 ms 00:16:00.122 [2024-12-04 14:17:01.395773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.408663] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:00.122 [2024-12-04 14:17:01.408789] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:00.122 [2024-12-04 14:17:01.408844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.408864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:00.122 [2024-12-04 14:17:01.408883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.965 ms 00:16:00.122 [2024-12-04 14:17:01.408901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.433560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.433662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:00.122 [2024-12-04 14:17:01.433709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.588 ms 00:16:00.122 [2024-12-04 14:17:01.433731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.446174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.446291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:00.122 [2024-12-04 14:17:01.446353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.932 ms 00:16:00.122 [2024-12-04 14:17:01.446375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 
14:17:01.458808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.458929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:00.122 [2024-12-04 14:17:01.458983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.573 ms 00:16:00.122 [2024-12-04 14:17:01.459006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.459432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.459473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:00.122 [2024-12-04 14:17:01.459564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:00.122 [2024-12-04 14:17:01.459589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.516633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.516765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:00.122 [2024-12-04 14:17:01.516815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.953 ms 00:16:00.122 [2024-12-04 14:17:01.516842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.527505] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:00.122 [2024-12-04 14:17:01.541035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.541164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:00.122 [2024-12-04 14:17:01.541218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.789 ms 00:16:00.122 [2024-12-04 14:17:01.541242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.541320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.541345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:00.122 [2024-12-04 14:17:01.541368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:00.122 [2024-12-04 14:17:01.541387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.541447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.541468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:00.122 [2024-12-04 14:17:01.541488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:00.122 [2024-12-04 14:17:01.541550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.542718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.542809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:00.122 [2024-12-04 14:17:01.542861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:16:00.122 [2024-12-04 14:17:01.542882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.543192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.543242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:00.122 [2024-12-04 14:17:01.543403] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:00.122 [2024-12-04 14:17:01.543433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.543488] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:00.122 [2024-12-04 14:17:01.543511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.543530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:00.122 [2024-12-04 14:17:01.543549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:00.122 [2024-12-04 14:17:01.543566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.122 [2024-12-04 14:17:01.567341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.122 [2024-12-04 14:17:01.567448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:00.122 [2024-12-04 14:17:01.567497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.741 ms 00:16:00.122 [2024-12-04 14:17:01.567519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.123 [2024-12-04 14:17:01.567885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.123 [2024-12-04 14:17:01.567941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:00.123 [2024-12-04 14:17:01.568014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:00.123 [2024-12-04 14:17:01.568025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.123 [2024-12-04 14:17:01.568818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:00.123 [2024-12-04 14:17:01.571971] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.495 ms, result 0 00:16:00.123 [2024-12-04 14:17:01.572911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:00.426 [2024-12-04 14:17:01.586207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:00.426  [2024-12-04T14:17:01.891Z] Copying: 4096/4096 [kB] (average 18 MBps)[2024-12-04 14:17:01.808637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:00.426 [2024-12-04 14:17:01.817296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.817335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:00.426 [2024-12-04 14:17:01.817346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:00.426 [2024-12-04 14:17:01.817354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.426 [2024-12-04 14:17:01.817374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:00.426 [2024-12-04 14:17:01.819881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.819990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:00.426 [2024-12-04 14:17:01.820005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:16:00.426 [2024-12-04 14:17:01.820013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:00.426 [2024-12-04 14:17:01.822890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.822926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:00.426 [2024-12-04 14:17:01.822935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.854 ms 00:16:00.426 [2024-12-04 14:17:01.822946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.426 [2024-12-04 14:17:01.827380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.827470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:00.426 [2024-12-04 14:17:01.827520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.417 ms 00:16:00.426 [2024-12-04 14:17:01.827542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.426 [2024-12-04 14:17:01.834400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.834426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:00.426 [2024-12-04 14:17:01.834437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.818 ms 00:16:00.426 [2024-12-04 14:17:01.834448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.426 [2024-12-04 14:17:01.857954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.426 [2024-12-04 14:17:01.858061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:00.426 [2024-12-04 14:17:01.858075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms 00:16:00.426 [2024-12-04 14:17:01.858082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.872594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.872725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:00.696 [2024-12-04 14:17:01.872789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.454 ms 00:16:00.696 [2024-12-04 14:17:01.872888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.873050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.873076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:00.696 [2024-12-04 14:17:01.873160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:00.696 [2024-12-04 14:17:01.873183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.897166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.897269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:00.696 [2024-12-04 14:17:01.897316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.954 ms 00:16:00.696 [2024-12-04 14:17:01.897337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.921235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.921357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:00.696 [2024-12-04 14:17:01.921416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.610 ms 00:16:00.696 [2024-12-04 
14:17:01.921439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.944886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.945009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:00.696 [2024-12-04 14:17:01.945063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms 00:16:00.696 [2024-12-04 14:17:01.945095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.968483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.696 [2024-12-04 14:17:01.968605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:00.696 [2024-12-04 14:17:01.968661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.079 ms 00:16:00.696 [2024-12-04 14:17:01.968683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.696 [2024-12-04 14:17:01.968736] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:00.696 [2024-12-04 14:17:01.968765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.968996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 
14:17:01.969891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.969994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:00.696 [2024-12-04 14:17:01.970687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
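
Aside: the ftl_dev_dump_bands records running through this stretch of the log (continuing below through Band 100) all report the same thing: 0 / 261120 valid blocks, wr_cnt 0, state free. The companion ftl_dev_dump_stats block that closes the dump reports "WAF: inf" simply because write amplification is total writes divided by user writes, and here user writes are 0 while internal writes are 960. A hundred near-identical records are easier to check with a small filter; the sketch below is illustrative only (not part of the SPDK tree or the test harness) and assumes the log has been unwrapped so each record sits on one line.

    import re
    from collections import Counter

    # Illustrative log filter, not part of SPDK or the autotest harness.
    BAND_RE = re.compile(
        r"ftl_dev_dump_bands: \*NOTICE\*: \[FTL\]\[\w+\] "
        r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def summarize_bands(log_text):
        # Collapse the per-band dump into per-state counts plus
        # aggregate valid/total block figures.
        states, valid, total = Counter(), 0, 0
        for _band, v, t, _wr, state in BAND_RE.findall(log_text):
            states[state] += 1
            valid += int(v)
            total += int(t)
        return states, valid, total

    # For the dump around this point this yields:
    #   (Counter({'free': 100}), 0, 26112000)
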
00:16:00.697 [2024-12-04 14:17:01.970876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.970927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:00.697 [2024-12-04 14:17:01.971550] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:00.697 [2024-12-04 14:17:01.971558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d 00:16:00.697 [2024-12-04 14:17:01.971566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:00.697 [2024-12-04 14:17:01.971572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:00.697 [2024-12-04 14:17:01.971579] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:00.697 [2024-12-04 14:17:01.971586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:00.697 [2024-12-04 14:17:01.971596] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:00.697 [2024-12-04 14:17:01.971603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:00.698 [2024-12-04 14:17:01.971610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:00.698 [2024-12-04 14:17:01.971616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:00.698 [2024-12-04 14:17:01.971622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:00.698 [2024-12-04 14:17:01.971630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.698 [2024-12-04 14:17:01.971638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:00.698 [2024-12-04 14:17:01.971647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:16:00.698 [2024-12-04 14:17:01.971654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:01.983998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.698 [2024-12-04 14:17:01.984026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:00.698 [2024-12-04 14:17:01.984041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.298 ms 00:16:00.698 [2024-12-04 14:17:01.984049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:01.984274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.698 [2024-12-04 14:17:01.984289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:16:00.698 [2024-12-04 14:17:01.984297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:16:00.698 [2024-12-04 14:17:01.984303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.021419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.021452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:00.698 [2024-12-04 14:17:02.021466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.021473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.021551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.021559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:00.698 [2024-12-04 14:17:02.021567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.021574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.021613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.021621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:00.698 [2024-12-04 14:17:02.021629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.021638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.021654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.021662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:00.698 [2024-12-04 14:17:02.021669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.021675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.095683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.095815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:00.698 [2024-12-04 14:17:02.095838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.095846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:00.698 [2024-12-04 14:17:02.125205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:00.698 [2024-12-04 14:17:02.125279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 
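
Aside: the Action/Rollback pairs emitted by mngt/ftl_mngt.c in this shutdown sequence mirror each other: the 'FTL shutdown' process unwinds the steps that 'FTL startup' completed (Open base bdev, Open cache bdev, Initialize superblock, memory pools, bands, ...) in roughly reverse order, and each rollback is logged with its own name, duration and status just like the forward step was. The sketch below illustrates that step-list-with-undo pattern in general terms; its names and structure are assumptions about the shape the trace implies, not SPDK's actual ftl_mngt API.

    # Illustration of the pattern the trace_step records suggest;
    # the classes and functions here are assumed, not SPDK's API.
    class Step:
        def __init__(self, name, action, rollback=None):
            self.name, self.action, self.rollback = name, action, rollback

    def startup(steps):
        done = []
        for step in steps:
            step.action()            # trace: Action / name: <step.name>
            done.append(step)
        return done

    def shutdown(done):
        # Unwind completed steps in reverse, as the Rollback
        # entries around this point in the log do.
        for step in reversed(done):
            if step.rollback:
                step.rollback()      # trace: Rollback / name: <step.name>
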
[2024-12-04 14:17:02.125325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:00.698 [2024-12-04 14:17:02.125332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:00.698 [2024-12-04 14:17:02.125443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:00.698 [2024-12-04 14:17:02.125496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:00.698 [2024-12-04 14:17:02.125555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.698 [2024-12-04 14:17:02.125619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:00.698 [2024-12-04 14:17:02.125627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.698 [2024-12-04 14:17:02.125634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.698 [2024-12-04 14:17:02.125764] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 308.466 ms, result 0 00:16:01.640 00:16:01.640 00:16:01.640 14:17:02 -- ftl/trim.sh@93 -- # svcpid=72233 00:16:01.640 14:17:02 -- ftl/trim.sh@94 -- # waitforlisten 72233 00:16:01.640 14:17:02 -- common/autotest_common.sh@829 -- # '[' -z 72233 ']' 00:16:01.640 14:17:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.640 14:17:02 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:01.640 14:17:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.640 14:17:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.640 14:17:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.640 14:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:01.640 [2024-12-04 14:17:03.032664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
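Note: waitforlisten above blocks until the freshly started spdk_tgt (pid 72233) answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling pattern, as a simplified stand-in rather than the actual implementation of the helper in autotest_common.sh:

    # poll the RPC socket until the target responds to a known-good method
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done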
00:16:01.640 [2024-12-04 14:17:03.032782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72233 ] 00:16:01.911 [2024-12-04 14:17:03.180192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.911 [2024-12-04 14:17:03.356139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:01.911 [2024-12-04 14:17:03.356338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.299 14:17:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.299 14:17:04 -- common/autotest_common.sh@862 -- # return 0 00:16:03.299 14:17:04 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:03.299 [2024-12-04 14:17:04.721915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:03.299 [2024-12-04 14:17:04.721972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:03.563 [2024-12-04 14:17:04.889128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.889171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:03.563 [2024-12-04 14:17:04.889199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:03.563 [2024-12-04 14:17:04.889207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.891790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.891825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:03.563 [2024-12-04 14:17:04.891837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:16:03.563 [2024-12-04 14:17:04.891844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.891915] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:03.563 [2024-12-04 14:17:04.892635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:03.563 [2024-12-04 14:17:04.892706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.892715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:03.563 [2024-12-04 14:17:04.892726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:16:03.563 [2024-12-04 14:17:04.892733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.893848] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:03.563 [2024-12-04 14:17:04.906674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.906708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:03.563 [2024-12-04 14:17:04.906719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.832 ms 00:16:03.563 [2024-12-04 14:17:04.906729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.906800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.906812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:16:03.563 [2024-12-04 14:17:04.906820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:03.563 [2024-12-04 14:17:04.906829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.911689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.911722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:03.563 [2024-12-04 14:17:04.911731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:16:03.563 [2024-12-04 14:17:04.911739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.911823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.911834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:03.563 [2024-12-04 14:17:04.911841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:03.563 [2024-12-04 14:17:04.911850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.911875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.911884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:03.563 [2024-12-04 14:17:04.911892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:03.563 [2024-12-04 14:17:04.911902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.911928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:03.563 [2024-12-04 14:17:04.915359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.915384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:03.563 [2024-12-04 14:17:04.915394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.438 ms 00:16:03.563 [2024-12-04 14:17:04.915401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.915439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.915447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:03.563 [2024-12-04 14:17:04.915456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:03.563 [2024-12-04 14:17:04.915465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.915486] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:03.563 [2024-12-04 14:17:04.915502] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:03.563 [2024-12-04 14:17:04.915535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:03.563 [2024-12-04 14:17:04.915550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:03.563 [2024-12-04 14:17:04.915623] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:03.563 [2024-12-04 14:17:04.915632] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:16:03.563 [2024-12-04 14:17:04.915647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:03.563 [2024-12-04 14:17:04.915656] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:03.563 [2024-12-04 14:17:04.915666] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:03.563 [2024-12-04 14:17:04.915674] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:03.563 [2024-12-04 14:17:04.915682] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:03.563 [2024-12-04 14:17:04.915689] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:03.563 [2024-12-04 14:17:04.915699] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:03.563 [2024-12-04 14:17:04.915706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.915714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:03.563 [2024-12-04 14:17:04.915722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:16:03.563 [2024-12-04 14:17:04.915730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.915796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.563 [2024-12-04 14:17:04.915805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:03.563 [2024-12-04 14:17:04.915812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:03.563 [2024-12-04 14:17:04.915820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.563 [2024-12-04 14:17:04.915906] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:03.563 [2024-12-04 14:17:04.915918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:03.563 [2024-12-04 14:17:04.915926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.563 [2024-12-04 14:17:04.915935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.563 [2024-12-04 14:17:04.915942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:03.563 [2024-12-04 14:17:04.915950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:03.563 [2024-12-04 14:17:04.915957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:03.563 [2024-12-04 14:17:04.915969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:03.563 [2024-12-04 14:17:04.915975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:03.563 [2024-12-04 14:17:04.915983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.563 [2024-12-04 14:17:04.915991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:03.563 [2024-12-04 14:17:04.916000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:03.563 [2024-12-04 14:17:04.916006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.563 [2024-12-04 14:17:04.916014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:03.563 [2024-12-04 14:17:04.916020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:03.563 [2024-12-04 14:17:04.916028] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.563 [2024-12-04 14:17:04.916034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:03.563 [2024-12-04 14:17:04.916042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:03.563 [2024-12-04 14:17:04.916048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.563 [2024-12-04 14:17:04.916056] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:03.563 [2024-12-04 14:17:04.916062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:03.563 [2024-12-04 14:17:04.916070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:03.563 [2024-12-04 14:17:04.916077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:03.563 [2024-12-04 14:17:04.916097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:03.563 [2024-12-04 14:17:04.916104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:03.564 [2024-12-04 14:17:04.916125] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:03.564 [2024-12-04 14:17:04.916133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:03.564 [2024-12-04 14:17:04.916147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:03.564 [2024-12-04 14:17:04.916153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916162] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:03.564 [2024-12-04 14:17:04.916168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:03.564 [2024-12-04 14:17:04.916176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:03.564 [2024-12-04 14:17:04.916191] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:03.564 [2024-12-04 14:17:04.916197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.564 [2024-12-04 14:17:04.916205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:03.564 [2024-12-04 14:17:04.916212] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:03.564 [2024-12-04 14:17:04.916222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.564 [2024-12-04 14:17:04.916228] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:03.564 [2024-12-04 14:17:04.916239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:03.564 [2024-12-04 14:17:04.916246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.564 [2024-12-04 14:17:04.916263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:03.564 [2024-12-04 14:17:04.916271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:03.564 [2024-12-04 14:17:04.916277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
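Note: the layout is dumped twice, above in MiB per region and, in the superblock metadata dump that follows, in raw blk_offs/blk_sz units. Those raw units are 4 KiB FTL blocks (inferred from the numbers themselves, the dump does not state it), so the two views can be cross-checked with shell arithmetic; for example the l2p region reported above as 90.00 MiB reappears below as blk_sz:0x5a00:

    # 0x5a00 blocks x 4096 bytes per FTL block = 90 MiB
    blk_sz=0x5a00
    echo $(( blk_sz * 4096 / 1024 / 1024 ))   # prints 90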
00:16:03.564 [2024-12-04 14:17:04.916285] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:03.564 [2024-12-04 14:17:04.916291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:03.564 [2024-12-04 14:17:04.916299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:03.564 [2024-12-04 14:17:04.916307] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:03.564 [2024-12-04 14:17:04.916317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.564 [2024-12-04 14:17:04.916325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:03.564 [2024-12-04 14:17:04.916334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:03.564 [2024-12-04 14:17:04.916340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:03.564 [2024-12-04 14:17:04.916352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:03.564 [2024-12-04 14:17:04.916359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:03.564 [2024-12-04 14:17:04.916368] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:03.564 [2024-12-04 14:17:04.916375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:03.564 [2024-12-04 14:17:04.916383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:03.564 [2024-12-04 14:17:04.916390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:03.564 [2024-12-04 14:17:04.916399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:03.564 [2024-12-04 14:17:04.916405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:03.564 [2024-12-04 14:17:04.916414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:03.564 [2024-12-04 14:17:04.916422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:03.564 [2024-12-04 14:17:04.916430] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:03.564 [2024-12-04 14:17:04.916438] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.564 [2024-12-04 14:17:04.916447] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:03.564 [2024-12-04 14:17:04.916454] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:03.564 [2024-12-04 14:17:04.916463] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:03.564 [2024-12-04 14:17:04.916471] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:03.564 [2024-12-04 14:17:04.916481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.916488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:03.564 [2024-12-04 14:17:04.916497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:16:03.564 [2024-12-04 14:17:04.916504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.931228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.931259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:03.564 [2024-12-04 14:17:04.931274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.675 ms 00:16:03.564 [2024-12-04 14:17:04.931283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.931396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.931406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:03.564 [2024-12-04 14:17:04.931416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:03.564 [2024-12-04 14:17:04.931423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.962060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.962205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:03.564 [2024-12-04 14:17:04.962225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.615 ms 00:16:03.564 [2024-12-04 14:17:04.962233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.962291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.962302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:03.564 [2024-12-04 14:17:04.962311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:03.564 [2024-12-04 14:17:04.962318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.962629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.962642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:03.564 [2024-12-04 14:17:04.962654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:03.564 [2024-12-04 14:17:04.962661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.962774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.962782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:03.564 [2024-12-04 14:17:04.962793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:03.564 [2024-12-04 14:17:04.962800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.977372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.977478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:03.564 [2024-12-04 14:17:04.977497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.551 ms 00:16:03.564 [2024-12-04 14:17:04.977505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:04.990220] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:03.564 [2024-12-04 14:17:04.990249] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:03.564 [2024-12-04 14:17:04.990262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:04.990270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:03.564 [2024-12-04 14:17:04.990280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.659 ms 00:16:03.564 [2024-12-04 14:17:04.990286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.564 [2024-12-04 14:17:05.014565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.564 [2024-12-04 14:17:05.014596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:03.565 [2024-12-04 14:17:05.014610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.211 ms 00:16:03.565 [2024-12-04 14:17:05.014617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.822 [2024-12-04 14:17:05.026522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.822 [2024-12-04 14:17:05.026555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:03.822 [2024-12-04 14:17:05.026567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.837 ms 00:16:03.822 [2024-12-04 14:17:05.026574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.822 [2024-12-04 14:17:05.038298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.822 [2024-12-04 14:17:05.038324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:03.822 [2024-12-04 14:17:05.038338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.662 ms 00:16:03.822 [2024-12-04 14:17:05.038345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.038704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.038719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:03.823 [2024-12-04 14:17:05.038732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:16:03.823 [2024-12-04 14:17:05.038739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.093675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.093709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:03.823 [2024-12-04 14:17:05.093722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.913 ms 00:16:03.823 [2024-12-04 14:17:05.093728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 
14:17:05.101823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:03.823 [2024-12-04 14:17:05.113278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.113311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:03.823 [2024-12-04 14:17:05.113322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.488 ms 00:16:03.823 [2024-12-04 14:17:05.113330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.113385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.113395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:03.823 [2024-12-04 14:17:05.113402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:03.823 [2024-12-04 14:17:05.113411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.113448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.113456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:03.823 [2024-12-04 14:17:05.113461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:03.823 [2024-12-04 14:17:05.113468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.114439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.114465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:03.823 [2024-12-04 14:17:05.114472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:16:03.823 [2024-12-04 14:17:05.114479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.114505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.114516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:03.823 [2024-12-04 14:17:05.114521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:03.823 [2024-12-04 14:17:05.114528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.114554] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:03.823 [2024-12-04 14:17:05.114564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.114570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:03.823 [2024-12-04 14:17:05.114577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:03.823 [2024-12-04 14:17:05.114582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.132670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.132772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:03.823 [2024-12-04 14:17:05.132790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.068 ms 00:16:03.823 [2024-12-04 14:17:05.132796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.132862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.823 [2024-12-04 14:17:05.132870] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:03.823 [2024-12-04 14:17:05.132878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:03.823 [2024-12-04 14:17:05.132886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.823 [2024-12-04 14:17:05.133512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:03.823 [2024-12-04 14:17:05.135926] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.189 ms, result 0 00:16:03.823 [2024-12-04 14:17:05.136728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:03.823 Some configs were skipped because the RPC state that can call them passed over. 00:16:03.823 14:17:05 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:04.079 [2024-12-04 14:17:05.370263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.079 [2024-12-04 14:17:05.370375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:04.079 [2024-12-04 14:17:05.370418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.759 ms 00:16:04.079 [2024-12-04 14:17:05.370438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.079 [2024-12-04 14:17:05.370478] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.973 ms, result 0 00:16:04.079 true 00:16:04.079 14:17:05 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:04.337 [2024-12-04 14:17:05.577080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.337 [2024-12-04 14:17:05.577200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:16:04.337 [2024-12-04 14:17:05.577244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.521 ms 00:16:04.337 [2024-12-04 14:17:05.577261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.337 [2024-12-04 14:17:05.577303] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.742 ms, result 0 00:16:04.337 true 00:16:04.337 14:17:05 -- ftl/trim.sh@102 -- # killprocess 72233 00:16:04.337 14:17:05 -- common/autotest_common.sh@936 -- # '[' -z 72233 ']' 00:16:04.337 14:17:05 -- common/autotest_common.sh@940 -- # kill -0 72233 00:16:04.337 14:17:05 -- common/autotest_common.sh@941 -- # uname 00:16:04.337 14:17:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.337 14:17:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72233 00:16:04.337 14:17:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:04.337 14:17:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:04.337 14:17:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72233' 00:16:04.337 killing process with pid 72233 00:16:04.337 14:17:05 -- common/autotest_common.sh@955 -- # kill 72233 00:16:04.337 14:17:05 -- common/autotest_common.sh@960 -- # wait 72233 00:16:04.908 [2024-12-04 14:17:06.160351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.160388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:16:04.908 [2024-12-04 14:17:06.160398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:04.908 [2024-12-04 14:17:06.160407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.160424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:04.908 [2024-12-04 14:17:06.162549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.162568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:04.908 [2024-12-04 14:17:06.162578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:16:04.908 [2024-12-04 14:17:06.162585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.162791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.162798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:04.908 [2024-12-04 14:17:06.162805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:16:04.908 [2024-12-04 14:17:06.162810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.165994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.166161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:04.908 [2024-12-04 14:17:06.166221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:16:04.908 [2024-12-04 14:17:06.166240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.171659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.171748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:04.908 [2024-12-04 14:17:06.171796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:16:04.908 [2024-12-04 14:17:06.171815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.179426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.179509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:04.908 [2024-12-04 14:17:06.179560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.532 ms 00:16:04.908 [2024-12-04 14:17:06.179577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.186306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.186392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:04.908 [2024-12-04 14:17:06.186457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.692 ms 00:16:04.908 [2024-12-04 14:17:06.186475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.186588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.186608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:04.908 [2024-12-04 14:17:06.186701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:04.908 [2024-12-04 14:17:06.186719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
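Note: killprocess above delivers a plain SIGTERM, which is why the graceful 'FTL shutdown' sequence logged here (Persist L2P, NV cache, valid map, P2L, band and trim metadata, then the superblock) runs to completion before the target exits. Condensed to its core, as a hypothetical simplification of the killprocess helper:

    # SIGTERM lets spdk_tgt run its 'FTL shutdown' management process;
    # wait (valid here because spdk_tgt is a child of the test shell) reaps it
    kill "$svcpid"
    wait "$svcpid" 2>/dev/null || true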
00:16:04.908 [2024-12-04 14:17:06.194666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.194742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:04.908 [2024-12-04 14:17:06.194816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:16:04.908 [2024-12-04 14:17:06.194833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.201855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.201931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:04.908 [2024-12-04 14:17:06.201977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:16:04.908 [2024-12-04 14:17:06.201993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.209235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.209310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:04.908 [2024-12-04 14:17:06.209351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms 00:16:04.908 [2024-12-04 14:17:06.209391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.216520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.908 [2024-12-04 14:17:06.216598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:04.908 [2024-12-04 14:17:06.216638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.070 ms 00:16:04.908 [2024-12-04 14:17:06.216654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.908 [2024-12-04 14:17:06.216694] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:04.908 [2024-12-04 14:17:06.216758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.216788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.216810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.216833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.216879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.216907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217201] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:04.908 [2024-12-04 14:17:06.217895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.217917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.217940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 
14:17:06.217989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:16:04.909 [2024-12-04 14:17:06.218744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.218992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.219967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.220002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:04.909 [2024-12-04 14:17:06.220030] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:04.909 [2024-12-04 14:17:06.220048] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d 00:16:04.909 [2024-12-04 14:17:06.220070] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:04.909 [2024-12-04 14:17:06.220095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:04.909 [2024-12-04 14:17:06.220112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:04.909 [2024-12-04 14:17:06.220174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:04.909 [2024-12-04 14:17:06.220190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:04.909 [2024-12-04 14:17:06.220206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:04.909 [2024-12-04 14:17:06.220221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:04.909 [2024-12-04 14:17:06.220236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:04.909 [2024-12-04 14:17:06.220249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:04.909 [2024-12-04 14:17:06.220314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.909 [2024-12-04 14:17:06.220322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:04.909 [2024-12-04 14:17:06.220330] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:16:04.909 [2024-12-04 14:17:06.220337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.909 [2024-12-04 14:17:06.229997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.909 [2024-12-04 14:17:06.230073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:04.909 [2024-12-04 14:17:06.230159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.640 ms 00:16:04.909 [2024-12-04 14:17:06.230177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.909 [2024-12-04 14:17:06.230377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.909 [2024-12-04 14:17:06.230429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:04.909 [2024-12-04 14:17:06.230471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:16:04.909 [2024-12-04 14:17:06.230487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.909 [2024-12-04 14:17:06.265474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.909 [2024-12-04 14:17:06.265556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:04.909 [2024-12-04 14:17:06.265595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.909 [2024-12-04 14:17:06.265611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.909 [2024-12-04 14:17:06.265681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.265698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:04.910 [2024-12-04 14:17:06.265717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.265730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.265771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.265788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:04.910 [2024-12-04 14:17:06.265807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.265854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.265883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.265898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:04.910 [2024-12-04 14:17:06.265914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.265930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.326032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.326171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:04.910 [2024-12-04 14:17:06.326220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.326240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:16:04.910 [2024-12-04 14:17:06.348299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:04.910 [2024-12-04 14:17:06.348401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:04.910 [2024-12-04 14:17:06.348479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:04.910 [2024-12-04 14:17:06.348688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:04.910 [2024-12-04 14:17:06.348806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.348898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:04.910 [2024-12-04 14:17:06.348916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.348931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.348975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:04.910 [2024-12-04 14:17:06.349107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:04.910 [2024-12-04 14:17:06.349130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:04.910 [2024-12-04 14:17:06.349144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.910 [2024-12-04 14:17:06.349265] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 188.896 ms, result 0 00:16:05.845 14:17:06 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:05.845 [2024-12-04 14:17:07.037459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:05.845 [2024-12-04 14:17:07.037725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72288 ] 00:16:05.845 [2024-12-04 14:17:07.183909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.104 [2024-12-04 14:17:07.324437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.104 [2024-12-04 14:17:07.528545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.105 [2024-12-04 14:17:07.528596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:06.365 [2024-12-04 14:17:07.671922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.365 [2024-12-04 14:17:07.671964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:06.365 [2024-12-04 14:17:07.671974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.365 [2024-12-04 14:17:07.671980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.365 [2024-12-04 14:17:07.674005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.365 [2024-12-04 14:17:07.674037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.365 [2024-12-04 14:17:07.674045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:16:06.365 [2024-12-04 14:17:07.674050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.365 [2024-12-04 14:17:07.674242] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:06.365 [2024-12-04 14:17:07.674797] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:06.365 [2024-12-04 14:17:07.674812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.365 [2024-12-04 14:17:07.674817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.365 [2024-12-04 14:17:07.674824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:16:06.365 [2024-12-04 14:17:07.674829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.365 [2024-12-04 14:17:07.675791] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:06.365 [2024-12-04 14:17:07.685413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.365 [2024-12-04 14:17:07.685532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:06.365 [2024-12-04 14:17:07.685546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.624 ms 00:16:06.365 [2024-12-04 14:17:07.685551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.365 [2024-12-04 14:17:07.685610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.365 [2024-12-04 14:17:07.685618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:06.366 [2024-12-04 14:17:07.685624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:06.366 [2024-12-04 14:17:07.685630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.689930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 
14:17:07.689956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.366 [2024-12-04 14:17:07.689963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:16:06.366 [2024-12-04 14:17:07.689972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.690057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.690065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.366 [2024-12-04 14:17:07.690071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:06.366 [2024-12-04 14:17:07.690077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.690117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.690124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:06.366 [2024-12-04 14:17:07.690130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:06.366 [2024-12-04 14:17:07.690136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.690158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:06.366 [2024-12-04 14:17:07.692890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.692988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.366 [2024-12-04 14:17:07.693000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.742 ms 00:16:06.366 [2024-12-04 14:17:07.693008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.693039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.693049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:06.366 [2024-12-04 14:17:07.693055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:06.366 [2024-12-04 14:17:07.693060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.693074] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:06.366 [2024-12-04 14:17:07.693102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:06.366 [2024-12-04 14:17:07.693128] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:06.366 [2024-12-04 14:17:07.693141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:06.366 [2024-12-04 14:17:07.693198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:06.366 [2024-12-04 14:17:07.693206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:06.366 [2024-12-04 14:17:07.693213] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:06.366 [2024-12-04 14:17:07.693221] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693227] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693233] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:06.366 [2024-12-04 14:17:07.693238] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:06.366 [2024-12-04 14:17:07.693244] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:06.366 [2024-12-04 14:17:07.693251] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:06.366 [2024-12-04 14:17:07.693257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.693262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:06.366 [2024-12-04 14:17:07.693268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:16:06.366 [2024-12-04 14:17:07.693273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.693321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.366 [2024-12-04 14:17:07.693328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:06.366 [2024-12-04 14:17:07.693333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:06.366 [2024-12-04 14:17:07.693339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.366 [2024-12-04 14:17:07.693393] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:06.366 [2024-12-04 14:17:07.693400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:06.366 [2024-12-04 14:17:07.693406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:06.366 [2024-12-04 14:17:07.693422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:06.366 [2024-12-04 14:17:07.693438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.366 [2024-12-04 14:17:07.693448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:06.366 [2024-12-04 14:17:07.693453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:06.366 [2024-12-04 14:17:07.693458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.366 [2024-12-04 14:17:07.693464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:06.366 [2024-12-04 14:17:07.693473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:06.366 [2024-12-04 14:17:07.693478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693484] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:06.366 [2024-12-04 14:17:07.693488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:06.366 [2024-12-04 14:17:07.693494] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:06.366 [2024-12-04 14:17:07.693503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:06.366 [2024-12-04 14:17:07.693509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:06.366 [2024-12-04 14:17:07.693518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693528] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:06.366 [2024-12-04 14:17:07.693533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:06.366 [2024-12-04 14:17:07.693548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:06.366 [2024-12-04 14:17:07.693562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:06.366 [2024-12-04 14:17:07.693577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.366 [2024-12-04 14:17:07.693587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:06.366 [2024-12-04 14:17:07.693592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:06.366 [2024-12-04 14:17:07.693596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.366 [2024-12-04 14:17:07.693601] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:06.366 [2024-12-04 14:17:07.693606] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:06.366 [2024-12-04 14:17:07.693612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.366 [2024-12-04 14:17:07.693626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:06.366 [2024-12-04 14:17:07.693632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:06.366 [2024-12-04 14:17:07.693637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:06.366 [2024-12-04 14:17:07.693642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:06.366 [2024-12-04 14:17:07.693647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:06.366 [2024-12-04 14:17:07.693652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:06.366 [2024-12-04 14:17:07.693658] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:06.366 [2024-12-04 14:17:07.693665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.366 [2024-12-04 14:17:07.693671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:06.366 [2024-12-04 14:17:07.693677] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:06.366 [2024-12-04 14:17:07.693682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:06.366 [2024-12-04 14:17:07.693687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:06.367 [2024-12-04 14:17:07.693693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:06.367 [2024-12-04 14:17:07.693698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:06.367 [2024-12-04 14:17:07.693703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:06.367 [2024-12-04 14:17:07.693709] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:06.367 [2024-12-04 14:17:07.693714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:06.367 [2024-12-04 14:17:07.693719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:06.367 [2024-12-04 14:17:07.693725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:06.367 [2024-12-04 14:17:07.693731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:06.367 [2024-12-04 14:17:07.693736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:06.367 [2024-12-04 14:17:07.693741] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:06.367 [2024-12-04 14:17:07.693749] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.367 [2024-12-04 14:17:07.693755] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:06.367 [2024-12-04 14:17:07.693760] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:06.367 [2024-12-04 14:17:07.693765] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:06.367 [2024-12-04 14:17:07.693771] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:06.367 [2024-12-04 14:17:07.693777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.693782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:06.367 [2024-12-04 14:17:07.693788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:16:06.367 [2024-12-04 14:17:07.693793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.705856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.705944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:06.367 [2024-12-04 14:17:07.705984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.032 ms 00:16:06.367 [2024-12-04 14:17:07.706000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.706121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.706143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:06.367 [2024-12-04 14:17:07.706206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:06.367 [2024-12-04 14:17:07.706223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.746783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.746888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:06.367 [2024-12-04 14:17:07.746933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.534 ms 00:16:06.367 [2024-12-04 14:17:07.746952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.747016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.747036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:06.367 [2024-12-04 14:17:07.747056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:06.367 [2024-12-04 14:17:07.747071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.747370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.747401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:06.367 [2024-12-04 14:17:07.747417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:16:06.367 [2024-12-04 14:17:07.747431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.747532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.747549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:06.367 [2024-12-04 14:17:07.747564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:06.367 [2024-12-04 14:17:07.747577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.758910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.758995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:06.367 [2024-12-04 14:17:07.759052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.265 ms 00:16:06.367 
[2024-12-04 14:17:07.759075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.768797] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:06.367 [2024-12-04 14:17:07.768898] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:06.367 [2024-12-04 14:17:07.768943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.768958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:06.367 [2024-12-04 14:17:07.768973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.765 ms 00:16:06.367 [2024-12-04 14:17:07.768987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.787525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.787617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:06.367 [2024-12-04 14:17:07.787656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.491 ms 00:16:06.367 [2024-12-04 14:17:07.787672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.796670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.796755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:06.367 [2024-12-04 14:17:07.796801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.943 ms 00:16:06.367 [2024-12-04 14:17:07.796817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.805482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.805569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:06.367 [2024-12-04 14:17:07.805608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.603 ms 00:16:06.367 [2024-12-04 14:17:07.805624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.367 [2024-12-04 14:17:07.805897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.367 [2024-12-04 14:17:07.805920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:06.367 [2024-12-04 14:17:07.805975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:06.367 [2024-12-04 14:17:07.805995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.851356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.851461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:06.626 [2024-12-04 14:17:07.851515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.333 ms 00:16:06.626 [2024-12-04 14:17:07.851539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.859693] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:06.626 [2024-12-04 14:17:07.871137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.871237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:06.626 [2024-12-04 14:17:07.871280] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.326 ms 00:16:06.626 [2024-12-04 14:17:07.871304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.871367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.871595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:06.626 [2024-12-04 14:17:07.871634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.626 [2024-12-04 14:17:07.871650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.871703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.871721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:06.626 [2024-12-04 14:17:07.871736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:06.626 [2024-12-04 14:17:07.871750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.872706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.872789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:06.626 [2024-12-04 14:17:07.872826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:16:06.626 [2024-12-04 14:17:07.872842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.872875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.872895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:06.626 [2024-12-04 14:17:07.872910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.626 [2024-12-04 14:17:07.872948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.872989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:06.626 [2024-12-04 14:17:07.873007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.873021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:06.626 [2024-12-04 14:17:07.873035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:06.626 [2024-12-04 14:17:07.873069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.891205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.891293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:06.626 [2024-12-04 14:17:07.891335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.092 ms 00:16:06.626 [2024-12-04 14:17:07.891351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.891423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.626 [2024-12-04 14:17:07.891444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:06.626 [2024-12-04 14:17:07.891477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:06.626 [2024-12-04 14:17:07.891494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.626 [2024-12-04 14:17:07.892130] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:06.626 [2024-12-04 14:17:07.894604] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.975 ms, result 0 00:16:06.626 [2024-12-04 14:17:07.895278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:06.626 [2024-12-04 14:17:07.910378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:07.573  [2024-12-04T14:17:09.984Z] Copying: 25/256 [MB] (25 MBps) [2024-12-04T14:17:11.370Z] Copying: 42/256 [MB] (16 MBps) [2024-12-04T14:17:12.316Z] Copying: 63/256 [MB] (21 MBps) [2024-12-04T14:17:13.264Z] Copying: 94/256 [MB] (30 MBps) [2024-12-04T14:17:14.209Z] Copying: 112/256 [MB] (18 MBps) [2024-12-04T14:17:15.155Z] Copying: 131/256 [MB] (18 MBps) [2024-12-04T14:17:16.192Z] Copying: 150/256 [MB] (19 MBps) [2024-12-04T14:17:17.138Z] Copying: 171/256 [MB] (20 MBps) [2024-12-04T14:17:18.083Z] Copying: 189/256 [MB] (18 MBps) [2024-12-04T14:17:19.027Z] Copying: 206/256 [MB] (16 MBps) [2024-12-04T14:17:19.972Z] Copying: 230/256 [MB] (24 MBps) [2024-12-04T14:17:19.972Z] Copying: 255/256 [MB] (24 MBps) [2024-12-04T14:17:19.972Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-04 14:17:19.965168] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:18.769 [2024-12-04 14:17:19.974781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:19.974924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:18.769 [2024-12-04 14:17:19.974943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:18.769 [2024-12-04 14:17:19.974951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:19.974976] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:18.769 [2024-12-04 14:17:19.977834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:19.977863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:18.769 [2024-12-04 14:17:19.977873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:16:18.769 [2024-12-04 14:17:19.977881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:19.978182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:19.978194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:18.769 [2024-12-04 14:17:19.978202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:18.769 [2024-12-04 14:17:19.978213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:19.981903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:19.982007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:18.769 [2024-12-04 14:17:19.982021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:16:18.769 [2024-12-04 14:17:19.982029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:19.988891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 
[2024-12-04 14:17:19.988992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:18.769 [2024-12-04 14:17:19.989008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.832 ms 00:16:18.769 [2024-12-04 14:17:19.989016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:20.013363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:20.013399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:18.769 [2024-12-04 14:17:20.013410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.282 ms 00:16:18.769 [2024-12-04 14:17:20.013417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.769 [2024-12-04 14:17:20.027462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.769 [2024-12-04 14:17:20.027585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:18.769 [2024-12-04 14:17:20.027601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.999 ms 00:16:18.770 [2024-12-04 14:17:20.027608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.027750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.770 [2024-12-04 14:17:20.027760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:18.770 [2024-12-04 14:17:20.027767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:18.770 [2024-12-04 14:17:20.027774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.052181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.770 [2024-12-04 14:17:20.052211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:18.770 [2024-12-04 14:17:20.052221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.391 ms 00:16:18.770 [2024-12-04 14:17:20.052228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.075906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.770 [2024-12-04 14:17:20.075936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:18.770 [2024-12-04 14:17:20.075946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.635 ms 00:16:18.770 [2024-12-04 14:17:20.075953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.099063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.770 [2024-12-04 14:17:20.099182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:18.770 [2024-12-04 14:17:20.099198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.066 ms 00:16:18.770 [2024-12-04 14:17:20.099205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.122422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.770 [2024-12-04 14:17:20.122525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:18.770 [2024-12-04 14:17:20.122539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.153 ms 00:16:18.770 [2024-12-04 14:17:20.122546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.770 [2024-12-04 14:17:20.122584] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:18.770 [2024-12-04 14:17:20.122598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122774] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 
14:17:20.122960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.122995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:18.770 [2024-12-04 14:17:20.123123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:16:18.771 [2024-12-04 14:17:20.123159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:18.771 [2024-12-04 14:17:20.123366] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:18.771 [2024-12-04 14:17:20.123374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7677b12-c522-4546-9c0d-e96917bc5b1d 00:16:18.771 [2024-12-04 14:17:20.123392] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:18.771 [2024-12-04 14:17:20.123399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:18.771 [2024-12-04 14:17:20.123406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:18.771 [2024-12-04 14:17:20.123413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:18.771 [2024-12-04 14:17:20.123420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:18.771 [2024-12-04 14:17:20.123430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:18.771 [2024-12-04 14:17:20.123436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:18.771 [2024-12-04 14:17:20.123443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:18.771 [2024-12-04 14:17:20.123449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:18.771 [2024-12-04 14:17:20.123456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.771 [2024-12-04 14:17:20.123463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:18.771 [2024-12-04 14:17:20.123471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:16:18.771 [2024-12-04 14:17:20.123478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.771 [2024-12-04 14:17:20.135681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.771 [2024-12-04 14:17:20.135707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:18.771 [2024-12-04 14:17:20.135720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.173 ms 00:16:18.771 [2024-12-04 14:17:20.135728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.771 [2024-12-04 14:17:20.135932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.771 [2024-12-04 14:17:20.135946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:18.771 [2024-12-04 14:17:20.135954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:18.771 [2024-12-04 14:17:20.135960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.771 [2024-12-04 14:17:20.173303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:18.771 [2024-12-04 14:17:20.173421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:18.771 [2024-12-04 14:17:20.173440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:18.771 [2024-12-04 14:17:20.173448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.771 [2024-12-04 14:17:20.173526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:18.771 [2024-12-04 14:17:20.173535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:18.771 [2024-12-04 14:17:20.173542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:16:18.771 [2024-12-04 14:17:20.173549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:18.771 [2024-12-04 14:17:20.173588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:18.771 [2024-12-04 14:17:20.173596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:18.771 [2024-12-04 14:17:20.173604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:18.771 [2024-12-04 14:17:20.173614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:18.771 [2024-12-04 14:17:20.173631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:18.771 [2024-12-04 14:17:20.173638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:18.771 [2024-12-04 14:17:20.173645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:18.771 [2024-12-04 14:17:20.173652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.032 [2024-12-04 14:17:20.247174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.032 [2024-12-04 14:17:20.247210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:19.032 [2024-12-04 14:17:20.247228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.032 [2024-12-04 14:17:20.247236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.032 [2024-12-04 14:17:20.276834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.032 [2024-12-04 14:17:20.276950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:19.032 [2024-12-04 14:17:20.276964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.032 [2024-12-04 14:17:20.276972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.032 [2024-12-04 14:17:20.277023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.032 [2024-12-04 14:17:20.277032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:19.032 [2024-12-04 14:17:20.277039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.032 [2024-12-04 14:17:20.277046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.032 [2024-12-04 14:17:20.277079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.032 [2024-12-04 14:17:20.277102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:19.032 [2024-12-04 14:17:20.277111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.032 [2024-12-04 14:17:20.277118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.032 [2024-12-04 14:17:20.277205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.032 [2024-12-04 14:17:20.277214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:19.032 [2024-12-04 14:17:20.277222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.032 [2024-12-04 14:17:20.277229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.033 [2024-12-04 14:17:20.277259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.033 [2024-12-04 14:17:20.277267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:19.033 [2024-12-04 14:17:20.277274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.033 [2024-12-04 14:17:20.277282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.033 [2024-12-04 14:17:20.277315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.033 [2024-12-04 14:17:20.277324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:19.033 [2024-12-04 14:17:20.277331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.033 [2024-12-04 14:17:20.277338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.033 [2024-12-04 14:17:20.277380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:19.033 [2024-12-04 14:17:20.277391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:19.033 [2024-12-04 14:17:20.277399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:19.033 [2024-12-04 14:17:20.277406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:19.033 [2024-12-04 14:17:20.277532] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.755 ms, result 0
00:16:19.981
00:16:19.981
00:16:19.981 14:17:21 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:16:20.243 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:16:20.243 14:17:21 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:16:20.243 14:17:21 -- ftl/trim.sh@109 -- # fio_kill
00:16:20.243 14:17:21 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:16:20.243 14:17:21 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:20.243 14:17:21 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:16:20.243 14:17:21 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:20.504 Process with pid 72233 is not found 14:17:21 -- ftl/trim.sh@20 -- # killprocess 72233
00:16:20.504 14:17:21 -- common/autotest_common.sh@936 -- # '[' -z 72233 ']'
00:16:20.504 14:17:21 -- common/autotest_common.sh@940 -- # kill -0 72233
00:16:20.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72233) - No such process
00:16:20.504 14:17:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72233 is not found'
00:16:20.504 ************************************
00:16:20.504 END TEST ftl_trim
00:16:20.504 ************************************
00:16:20.504
00:16:20.504 real 1m9.661s
00:16:20.504 user 1m26.047s
00:16:20.504 sys 0m14.156s
00:16:20.504 14:17:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:20.504 14:17:21 -- common/autotest_common.sh@10 -- # set +x
00:16:20.504 14:17:21 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:16:20.504 14:17:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:20.504 14:17:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:20.504 14:17:21 -- common/autotest_common.sh@10 -- # set +x
00:16:20.504 ************************************
00:16:20.504 START TEST ftl_restore
00:16:20.504 ************************************
00:16:20.504 14:17:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
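A note on the md5 round-trip that just decided ftl_trim's verdict above: a digest of the data file is recorded earlier in the run while the FTL bdev is live, the device is shut down (the 'FTL shutdown' rollback sequence), and md5sum -c then re-reads the file, so "/home/vagrant/spdk_repo/spdk/test/ftl/data: OK" is the actual pass signal; only after that does fio_kill remove testfile.md5, random_pattern and data. A minimal bash sketch of the same verify pattern, where the teardown step is an assumption for illustration rather than lines lifted from trim.sh:

    md5sum data > testfile.md5    # while the device under test is up: record the digest
    # ... unload and re-create the FTL bdev (bdev_ftl_unload / bdev_ftl_create
    # in this log's terms), reattaching the same base and cache bdevs ...
    md5sum -c testfile.md5        # re-read afterwards: prints "data: OK" and exits 0
                                  # on a match, exits non-zero on any corruption

The non-zero exit on a mismatch is what would have failed the test here, before the cleanup traps ran.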
00:16:20.504 * Looking for test storage...
00:16:20.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:20.504 14:17:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:20.504 14:17:21 -- common/autotest_common.sh@1690 -- # lcov --version
00:16:20.504 14:17:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:20.504 14:17:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:20.504 14:17:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:20.505 14:17:21 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:20.505 14:17:21 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:20.505 14:17:21 -- scripts/common.sh@335 -- # IFS=.-:
00:16:20.505 14:17:21 -- scripts/common.sh@335 -- # read -ra ver1
00:16:20.505 14:17:21 -- scripts/common.sh@336 -- # IFS=.-:
00:16:20.505 14:17:21 -- scripts/common.sh@336 -- # read -ra ver2
00:16:20.505 14:17:21 -- scripts/common.sh@337 -- # local 'op=<'
00:16:20.505 14:17:21 -- scripts/common.sh@339 -- # ver1_l=2
00:16:20.505 14:17:21 -- scripts/common.sh@340 -- # ver2_l=1
00:16:20.505 14:17:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:20.505 14:17:21 -- scripts/common.sh@343 -- # case "$op" in
00:16:20.505 14:17:21 -- scripts/common.sh@344 -- # : 1
00:16:20.505 14:17:21 -- scripts/common.sh@363 -- # (( v = 0 ))
00:16:20.505 14:17:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:20.505 14:17:21 -- scripts/common.sh@364 -- # decimal 1
00:16:20.505 14:17:21 -- scripts/common.sh@352 -- # local d=1
00:16:20.505 14:17:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:20.505 14:17:21 -- scripts/common.sh@354 -- # echo 1
00:16:20.505 14:17:21 -- scripts/common.sh@364 -- # ver1[v]=1
00:16:20.505 14:17:21 -- scripts/common.sh@365 -- # decimal 2
00:16:20.505 14:17:21 -- scripts/common.sh@352 -- # local d=2
00:16:20.505 14:17:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:20.505 14:17:21 -- scripts/common.sh@354 -- # echo 2
00:16:20.505 14:17:21 -- scripts/common.sh@365 -- # ver2[v]=2
00:16:20.505 14:17:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:20.505 14:17:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:20.505 14:17:21 -- scripts/common.sh@367 -- # return 0
00:16:20.505 14:17:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:20.505 14:17:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.505 --rc genhtml_branch_coverage=1
00:16:20.505 --rc genhtml_function_coverage=1
00:16:20.505 --rc genhtml_legend=1
00:16:20.505 --rc geninfo_all_blocks=1
00:16:20.505 --rc geninfo_unexecuted_blocks=1
00:16:20.505
00:16:20.505 '
00:16:20.505 14:17:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.505 --rc genhtml_branch_coverage=1
00:16:20.505 --rc genhtml_function_coverage=1
00:16:20.505 --rc genhtml_legend=1
00:16:20.505 --rc geninfo_all_blocks=1
00:16:20.505 --rc geninfo_unexecuted_blocks=1
00:16:20.505
00:16:20.505 '
00:16:20.505 14:17:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:16:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.505 --rc genhtml_branch_coverage=1
00:16:20.505 --rc genhtml_function_coverage=1
00:16:20.505 --rc genhtml_legend=1
00:16:20.505 --rc geninfo_all_blocks=1
00:16:20.505 --rc geninfo_unexecuted_blocks=1
00:16:20.505
00:16:20.505 '
00:16:20.505 14:17:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:16:20.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:20.505 --rc genhtml_branch_coverage=1
00:16:20.505 --rc genhtml_function_coverage=1
00:16:20.505 --rc genhtml_legend=1
00:16:20.505 --rc geninfo_all_blocks=1
00:16:20.505 --rc geninfo_unexecuted_blocks=1
00:16:20.505
00:16:20.505 '
00:16:20.505 14:17:21 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:20.505 14:17:21 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:16:20.505 14:17:21 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:20.505 14:17:21 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:20.505 14:17:21 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:20.505 14:17:21 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:20.505 14:17:21 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:20.505 14:17:21 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:20.505 14:17:21 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:20.505 14:17:21 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.505 14:17:21 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.505 14:17:21 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:20.505 14:17:21 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:20.505 14:17:21 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:20.505 14:17:21 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:20.505 14:17:21 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:20.505 14:17:21 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:20.505 14:17:21 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.505 14:17:21 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.505 14:17:21 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:20.505 14:17:21 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:20.505 14:17:21 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:20.505 14:17:21 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:20.505 14:17:21 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:20.505 14:17:21 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:20.505 14:17:21 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:16:20.505 14:17:21 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:16:20.505 14:17:21 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:20.505 14:17:21 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:20.505 14:17:21 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:20.505 14:17:21 -- ftl/restore.sh@13 -- # mktemp -d
00:16:20.505 14:17:21 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.XNBYHFUmGW
00:16:20.505 14:17:21 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:16:20.505 14:17:21 -- ftl/restore.sh@16 -- # case $opt in
00:16:20.505 14:17:21 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0
00:16:20.505 14:17:21 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
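The getopts loop just traced is restore.sh's whole command-line interface: the leading ':' selects silent error handling, 'u:' and 'c:' each take an argument, and 'f' is a bare flag; this run only passes -c, which lands in nv_cache. A short bash sketch of the same shape follows. Only -c's meaning is visible in this log, so the variable names used for -u and -f below are illustrative guesses, not restore.sh's real ones:

    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c 0000:00:06.0 in this run: BDF of the NV-cache device
        u) uuid=$OPTARG ;;       # assumed: UUID of an existing FTL instance to pick up
        f) flag_f=1 ;;           # assumed: plain on/off switch
      esac
    done
    shift $((OPTIND - 1))        # one option plus its argument parsed, hence the "shift 2" seen next
    device=$1                    # first remaining positional: the base-device BDF, 0000:00:07.0 here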
00:16:20.505 14:17:21 -- ftl/restore.sh@23 -- # shift 2
00:16:20.505 14:17:21 -- ftl/restore.sh@24 -- # device=0000:00:07.0
00:16:20.505 14:17:21 -- ftl/restore.sh@25 -- # timeout=240
00:16:20.505 14:17:21 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:16:20.505 14:17:21 -- ftl/restore.sh@39 -- # svcpid=72518
00:16:20.505 14:17:21 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:20.505 14:17:21 -- ftl/restore.sh@41 -- # waitforlisten 72518
00:16:20.505 14:17:21 -- common/autotest_common.sh@829 -- # '[' -z 72518 ']'
00:16:20.505 14:17:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:20.505 14:17:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:20.505 14:17:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:20.505 14:17:21 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:20.505 14:17:21 -- common/autotest_common.sh@10 -- # set +x
00:16:20.767 [2024-12-04 14:17:22.029689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:20.767 [2024-12-04 14:17:22.029946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72518 ]
00:16:20.767 [2024-12-04 14:17:22.179108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:21.028 [2024-12-04 14:17:22.354553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:21.028 [2024-12-04 14:17:22.354887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:22.416 14:17:23 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:22.417 14:17:23 -- common/autotest_common.sh@862 -- # return 0
00:16:22.417 14:17:23 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:16:22.417 14:17:23 -- ftl/common.sh@54 -- # local name=nvme0
00:16:22.417 14:17:23 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:16:22.417 14:17:23 -- ftl/common.sh@56 -- # local size=103424
00:16:22.417 14:17:23 -- ftl/common.sh@59 -- # local base_bdev
00:16:22.417 14:17:23 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:16:22.417 14:17:23 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:16:22.417 14:17:23 -- ftl/common.sh@62 -- # local base_size
00:16:22.417 14:17:23 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:16:22.417 14:17:23 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1
00:16:22.417 14:17:23 -- common/autotest_common.sh@1368 -- # local bdev_info
00:16:22.417 14:17:23 -- common/autotest_common.sh@1369 -- # local bs
00:16:22.417 14:17:23 -- common/autotest_common.sh@1370 -- # local nb
00:16:22.417 14:17:23 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:16:22.680 14:17:23 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:16:22.680 {
00:16:22.680 "name": "nvme0n1",
00:16:22.680 "aliases": [
00:16:22.680 "32c3e0e6-79ef-4648-bbc2-1d0a62576aef"
00:16:22.680 ],
00:16:22.680 "product_name": "NVMe disk",
00:16:22.680 "block_size": 4096,
00:16:22.680 "num_blocks": 1310720,
00:16:22.680 "uuid": "32c3e0e6-79ef-4648-bbc2-1d0a62576aef",
00:16:22.680 "assigned_rate_limits": {
00:16:22.680 "rw_ios_per_sec": 0,
00:16:22.680 "rw_mbytes_per_sec": 0,
00:16:22.680 "r_mbytes_per_sec": 0,
00:16:22.680 "w_mbytes_per_sec": 0
00:16:22.680 },
00:16:22.680 "claimed": true,
00:16:22.680 "claim_type": "read_many_write_one",
00:16:22.680 "zoned": false,
00:16:22.680 "supported_io_types": {
00:16:22.680 "read": true,
00:16:22.680 "write": true,
00:16:22.680 "unmap": true,
00:16:22.680 "write_zeroes": true,
00:16:22.680 "flush": true,
00:16:22.680 "reset": true,
00:16:22.680 "compare": true,
00:16:22.680 "compare_and_write": false,
00:16:22.680 "abort": true,
00:16:22.680 "nvme_admin": true,
00:16:22.680 "nvme_io": true
00:16:22.680 },
00:16:22.680 "driver_specific": {
00:16:22.680 "nvme": [
00:16:22.680 {
00:16:22.680 "pci_address": "0000:00:07.0",
00:16:22.680 "trid": {
00:16:22.680 "trtype": "PCIe",
00:16:22.680 "traddr": "0000:00:07.0"
00:16:22.680 },
00:16:22.680 "ctrlr_data": {
00:16:22.680 "cntlid": 0,
00:16:22.680 "vendor_id": "0x1b36",
00:16:22.680 "model_number": "QEMU NVMe Ctrl",
00:16:22.680 "serial_number": "12341",
00:16:22.680 "firmware_revision": "8.0.0",
00:16:22.680 "subnqn": "nqn.2019-08.org.qemu:12341",
00:16:22.680 "oacs": {
00:16:22.680 "security": 0,
00:16:22.680 "format": 1,
00:16:22.680 "firmware": 0,
00:16:22.680 "ns_manage": 1
00:16:22.680 },
00:16:22.680 "multi_ctrlr": false,
00:16:22.680 "ana_reporting": false
00:16:22.680 },
00:16:22.680 "vs": {
00:16:22.680 "nvme_version": "1.4"
00:16:22.680 },
00:16:22.680 "ns_data": {
00:16:22.680 "id": 1,
00:16:22.680 "can_share": false
00:16:22.680 }
00:16:22.680 }
00:16:22.680 ],
00:16:22.680 "mp_policy": "active_passive"
00:16:22.680 }
00:16:22.680 }
00:16:22.680 ]'
00:16:22.680 14:17:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:16:22.680 14:17:24 -- common/autotest_common.sh@1372 -- # bs=4096
00:16:22.680 14:17:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:16:22.680 14:17:24 -- common/autotest_common.sh@1373 -- # nb=1310720
00:16:22.680 14:17:24 -- common/autotest_common.sh@1376 -- # bdev_size=5120
00:16:22.680 14:17:24 -- common/autotest_common.sh@1377 -- # echo 5120
00:16:22.680 14:17:24 -- ftl/common.sh@63 -- # base_size=5120
00:16:22.680 14:17:24 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:16:22.680 14:17:24 -- ftl/common.sh@67 -- # clear_lvols
00:16:22.680 14:17:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:16:22.680 14:17:24 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:16:22.941 14:17:24 -- ftl/common.sh@28 -- # stores=5717eb30-7b1e-49a7-a4b0-780e97f006bd
00:16:22.941 14:17:24 -- ftl/common.sh@29 -- # for lvs in $stores
00:16:22.941 14:17:24 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5717eb30-7b1e-49a7-a4b0-780e97f006bd
00:16:23.202 14:17:24 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:16:23.461 14:17:24 -- ftl/common.sh@68 -- # lvs=7c5b0691-1bdf-4874-8f45-d6da68affdf2
00:16:23.461 14:17:24 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7c5b0691-1bdf-4874-8f45-d6da68affdf2
00:16:23.461 14:17:24 -- ftl/restore.sh@43 -- # split_bdev=ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.461 14:17:24 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']'
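Worth pausing on the size arithmetic that just ran: get_bdev_size pulls the bdev's JSON description over RPC and multiplies block_size by num_blocks, so the 4096-byte blocks times 1310720 blocks above come out as bdev_size=5120 MiB, and the 103424 MiB logical volume still fits on that 5 GiB drive because bdev_lvol_create was invoked with -t, i.e. thin provisioning. A condensed bash sketch of the helper's core; the real function lives in autotest_common.sh and this is only its shape, not a verbatim copy:

    get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for nvme0n1 above
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 above
      echo $(( bs * nb / 1024 / 1024 ))             # 5120, the size in MiB echoed by the trace
    }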
00:16:23.461 14:17:24 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.461 14:17:24 -- ftl/common.sh@35 -- # local name=nvc0
00:16:23.461 14:17:24 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0
00:16:23.461 14:17:24 -- ftl/common.sh@37 -- # local base_bdev=ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.461 14:17:24 -- ftl/common.sh@38 -- # local cache_size=
00:16:23.461 14:17:24 -- ftl/common.sh@41 -- # get_bdev_size ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.461 14:17:24 -- common/autotest_common.sh@1367 -- # local bdev_name=ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.461 14:17:24 -- common/autotest_common.sh@1368 -- # local bdev_info
00:16:23.461 14:17:24 -- common/autotest_common.sh@1369 -- # local bs
00:16:23.461 14:17:24 -- common/autotest_common.sh@1370 -- # local nb
00:16:23.720 14:17:24 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.720 14:17:25 -- common/autotest_common.sh@1371 -- # bdev_info='[
00:16:23.720 {
00:16:23.720 "name": "ba3cf722-6b57-42f4-8668-e1033cdab78f",
00:16:23.720 "aliases": [
00:16:23.720 "lvs/nvme0n1p0"
00:16:23.720 ],
00:16:23.720 "product_name": "Logical Volume",
00:16:23.720 "block_size": 4096,
00:16:23.720 "num_blocks": 26476544,
00:16:23.720 "uuid": "ba3cf722-6b57-42f4-8668-e1033cdab78f",
00:16:23.720 "assigned_rate_limits": {
00:16:23.720 "rw_ios_per_sec": 0,
00:16:23.720 "rw_mbytes_per_sec": 0,
00:16:23.720 "r_mbytes_per_sec": 0,
00:16:23.720 "w_mbytes_per_sec": 0
00:16:23.720 },
00:16:23.720 "claimed": false,
00:16:23.720 "zoned": false,
00:16:23.720 "supported_io_types": {
00:16:23.720 "read": true,
00:16:23.720 "write": true,
00:16:23.720 "unmap": true,
00:16:23.720 "write_zeroes": true,
00:16:23.720 "flush": false,
00:16:23.720 "reset": true,
00:16:23.720 "compare": false,
00:16:23.720 "compare_and_write": false,
00:16:23.720 "abort": false,
00:16:23.720 "nvme_admin": false,
00:16:23.720 "nvme_io": false
00:16:23.720 },
00:16:23.720 "driver_specific": {
00:16:23.720 "lvol": {
00:16:23.720 "lvol_store_uuid": "7c5b0691-1bdf-4874-8f45-d6da68affdf2",
00:16:23.720 "base_bdev": "nvme0n1",
00:16:23.720 "thin_provision": true,
00:16:23.720 "snapshot": false,
00:16:23.720 "clone": false,
00:16:23.720 "esnap_clone": false
00:16:23.720 }
00:16:23.720 }
00:16:23.720 }
00:16:23.720 ]'
00:16:23.720 14:17:25 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size'
00:16:23.720 14:17:25 -- common/autotest_common.sh@1372 -- # bs=4096
00:16:23.720 14:17:25 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks'
00:16:23.978 14:17:25 -- common/autotest_common.sh@1373 -- # nb=26476544
00:16:23.978 14:17:25 -- common/autotest_common.sh@1376 -- # bdev_size=103424
00:16:23.978 14:17:25 -- common/autotest_common.sh@1377 -- # echo 103424
00:16:23.978 14:17:25 -- ftl/common.sh@41 -- # local base_size=5171
00:16:23.978 14:17:25 -- ftl/common.sh@44 -- # local nvc_bdev
00:16:23.978 14:17:25 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
00:16:23.978 14:17:25 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:16:23.978 14:17:25 -- ftl/common.sh@47 -- # [[ -z '' ]]
00:16:23.978 14:17:25 -- ftl/common.sh@48 -- # get_bdev_size ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.978 14:17:25 -- common/autotest_common.sh@1367 -- # local bdev_name=ba3cf722-6b57-42f4-8668-e1033cdab78f
00:16:23.978 14:17:25 -- common/autotest_common.sh@1368 -- # local bdev_info
00:16:23.978 14:17:25 -- common/autotest_common.sh@1369 -- # local
bs 00:16:23.978 14:17:25 -- common/autotest_common.sh@1370 -- # local nb 00:16:23.978 14:17:25 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba3cf722-6b57-42f4-8668-e1033cdab78f 00:16:24.235 14:17:25 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:24.235 { 00:16:24.235 "name": "ba3cf722-6b57-42f4-8668-e1033cdab78f", 00:16:24.235 "aliases": [ 00:16:24.235 "lvs/nvme0n1p0" 00:16:24.235 ], 00:16:24.235 "product_name": "Logical Volume", 00:16:24.235 "block_size": 4096, 00:16:24.235 "num_blocks": 26476544, 00:16:24.235 "uuid": "ba3cf722-6b57-42f4-8668-e1033cdab78f", 00:16:24.235 "assigned_rate_limits": { 00:16:24.235 "rw_ios_per_sec": 0, 00:16:24.235 "rw_mbytes_per_sec": 0, 00:16:24.235 "r_mbytes_per_sec": 0, 00:16:24.235 "w_mbytes_per_sec": 0 00:16:24.235 }, 00:16:24.235 "claimed": false, 00:16:24.235 "zoned": false, 00:16:24.235 "supported_io_types": { 00:16:24.235 "read": true, 00:16:24.235 "write": true, 00:16:24.235 "unmap": true, 00:16:24.235 "write_zeroes": true, 00:16:24.235 "flush": false, 00:16:24.235 "reset": true, 00:16:24.235 "compare": false, 00:16:24.235 "compare_and_write": false, 00:16:24.235 "abort": false, 00:16:24.235 "nvme_admin": false, 00:16:24.235 "nvme_io": false 00:16:24.235 }, 00:16:24.235 "driver_specific": { 00:16:24.235 "lvol": { 00:16:24.235 "lvol_store_uuid": "7c5b0691-1bdf-4874-8f45-d6da68affdf2", 00:16:24.235 "base_bdev": "nvme0n1", 00:16:24.235 "thin_provision": true, 00:16:24.235 "snapshot": false, 00:16:24.235 "clone": false, 00:16:24.235 "esnap_clone": false 00:16:24.235 } 00:16:24.235 } 00:16:24.235 } 00:16:24.235 ]' 00:16:24.235 14:17:25 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:24.235 14:17:25 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:24.235 14:17:25 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:24.235 14:17:25 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:24.235 14:17:25 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:24.235 14:17:25 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:24.235 14:17:25 -- ftl/common.sh@48 -- # cache_size=5171 00:16:24.235 14:17:25 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:24.493 14:17:25 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:24.493 14:17:25 -- ftl/restore.sh@48 -- # get_bdev_size ba3cf722-6b57-42f4-8668-e1033cdab78f 00:16:24.493 14:17:25 -- common/autotest_common.sh@1367 -- # local bdev_name=ba3cf722-6b57-42f4-8668-e1033cdab78f 00:16:24.493 14:17:25 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:24.493 14:17:25 -- common/autotest_common.sh@1369 -- # local bs 00:16:24.493 14:17:25 -- common/autotest_common.sh@1370 -- # local nb 00:16:24.493 14:17:25 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba3cf722-6b57-42f4-8668-e1033cdab78f 00:16:24.751 14:17:26 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:24.751 { 00:16:24.751 "name": "ba3cf722-6b57-42f4-8668-e1033cdab78f", 00:16:24.751 "aliases": [ 00:16:24.751 "lvs/nvme0n1p0" 00:16:24.751 ], 00:16:24.751 "product_name": "Logical Volume", 00:16:24.751 "block_size": 4096, 00:16:24.751 "num_blocks": 26476544, 00:16:24.751 "uuid": "ba3cf722-6b57-42f4-8668-e1033cdab78f", 00:16:24.751 "assigned_rate_limits": { 00:16:24.751 "rw_ios_per_sec": 0, 00:16:24.751 "rw_mbytes_per_sec": 0, 00:16:24.751 "r_mbytes_per_sec": 0, 00:16:24.751 "w_mbytes_per_sec": 0 00:16:24.751 }, 00:16:24.751 
"claimed": false, 00:16:24.751 "zoned": false, 00:16:24.751 "supported_io_types": { 00:16:24.751 "read": true, 00:16:24.751 "write": true, 00:16:24.751 "unmap": true, 00:16:24.751 "write_zeroes": true, 00:16:24.751 "flush": false, 00:16:24.751 "reset": true, 00:16:24.751 "compare": false, 00:16:24.751 "compare_and_write": false, 00:16:24.751 "abort": false, 00:16:24.751 "nvme_admin": false, 00:16:24.751 "nvme_io": false 00:16:24.751 }, 00:16:24.751 "driver_specific": { 00:16:24.751 "lvol": { 00:16:24.751 "lvol_store_uuid": "7c5b0691-1bdf-4874-8f45-d6da68affdf2", 00:16:24.751 "base_bdev": "nvme0n1", 00:16:24.751 "thin_provision": true, 00:16:24.751 "snapshot": false, 00:16:24.751 "clone": false, 00:16:24.751 "esnap_clone": false 00:16:24.751 } 00:16:24.751 } 00:16:24.751 } 00:16:24.751 ]' 00:16:24.751 14:17:26 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:24.751 14:17:26 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:24.751 14:17:26 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:24.751 14:17:26 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:24.751 14:17:26 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:24.751 14:17:26 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:24.751 14:17:26 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:24.751 14:17:26 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ba3cf722-6b57-42f4-8668-e1033cdab78f --l2p_dram_limit 10' 00:16:24.751 14:17:26 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:24.751 14:17:26 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:16:24.751 14:17:26 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:24.751 14:17:26 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:24.751 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:24.751 14:17:26 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ba3cf722-6b57-42f4-8668-e1033cdab78f --l2p_dram_limit 10 -c nvc0n1p0 00:16:25.010 [2024-12-04 14:17:26.303616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.010 [2024-12-04 14:17:26.303655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:25.010 [2024-12-04 14:17:26.303668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:25.010 [2024-12-04 14:17:26.303676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.010 [2024-12-04 14:17:26.303717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.010 [2024-12-04 14:17:26.303724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.010 [2024-12-04 14:17:26.303732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:25.010 [2024-12-04 14:17:26.303738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.010 [2024-12-04 14:17:26.303753] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:25.011 [2024-12-04 14:17:26.304389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:25.011 [2024-12-04 14:17:26.304411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.304417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.011 [2024-12-04 14:17:26.304425] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:16:25.011 [2024-12-04 14:17:26.304431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.304460] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf 00:16:25.011 [2024-12-04 14:17:26.305412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.305435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:25.011 [2024-12-04 14:17:26.305444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:25.011 [2024-12-04 14:17:26.305450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.310054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.310083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.011 [2024-12-04 14:17:26.310098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:16:25.011 [2024-12-04 14:17:26.310111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.310177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.310186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.011 [2024-12-04 14:17:26.310192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:25.011 [2024-12-04 14:17:26.310201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.310237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.310248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:25.011 [2024-12-04 14:17:26.310254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:25.011 [2024-12-04 14:17:26.310260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.310279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:25.011 [2024-12-04 14:17:26.313131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.313155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.011 [2024-12-04 14:17:26.313164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:16:25.011 [2024-12-04 14:17:26.313169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.313197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.313203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:25.011 [2024-12-04 14:17:26.313211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:25.011 [2024-12-04 14:17:26.313216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.313236] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:25.011 [2024-12-04 14:17:26.313322] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:25.011 [2024-12-04 14:17:26.313334] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:25.011 [2024-12-04 14:17:26.313342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:25.011 [2024-12-04 14:17:26.313351] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313357] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313366] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:25.011 [2024-12-04 14:17:26.313378] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:25.011 [2024-12-04 14:17:26.313385] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:25.011 [2024-12-04 14:17:26.313390] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:25.011 [2024-12-04 14:17:26.313397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.313403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:25.011 [2024-12-04 14:17:26.313410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:16:25.011 [2024-12-04 14:17:26.313415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.313463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.011 [2024-12-04 14:17:26.313469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:25.011 [2024-12-04 14:17:26.313476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:25.011 [2024-12-04 14:17:26.313483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.011 [2024-12-04 14:17:26.313540] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:25.011 [2024-12-04 14:17:26.313547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:25.011 [2024-12-04 14:17:26.313555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:25.011 [2024-12-04 14:17:26.313572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:25.011 [2024-12-04 14:17:26.313590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:25.011 [2024-12-04 14:17:26.313601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:25.011 [2024-12-04 14:17:26.313607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:25.011 [2024-12-04 14:17:26.313614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:25.011 [2024-12-04 14:17:26.313620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:25.011 [2024-12-04 14:17:26.313627] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.62 MiB 00:16:25.011 [2024-12-04 14:17:26.313632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:25.011 [2024-12-04 14:17:26.313645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:25.011 [2024-12-04 14:17:26.313651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313655] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:25.011 [2024-12-04 14:17:26.313661] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:25.011 [2024-12-04 14:17:26.313666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:25.011 [2024-12-04 14:17:26.313677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:25.011 [2024-12-04 14:17:26.313695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:25.011 [2024-12-04 14:17:26.313710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:25.011 [2024-12-04 14:17:26.313728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:25.011 [2024-12-04 14:17:26.313743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:25.011 [2024-12-04 14:17:26.313754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:25.011 [2024-12-04 14:17:26.313761] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:25.011 [2024-12-04 14:17:26.313766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:25.011 [2024-12-04 14:17:26.313772] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:25.011 [2024-12-04 14:17:26.313777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:25.011 [2024-12-04 14:17:26.313784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.011 [2024-12-04 14:17:26.313797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:25.011 [2024-12-04 14:17:26.313802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:25.011 [2024-12-04 14:17:26.313809] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:25.011 [2024-12-04 14:17:26.313815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:25.011 [2024-12-04 14:17:26.313822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:25.011 [2024-12-04 14:17:26.313827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:25.011 [2024-12-04 14:17:26.313834] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:25.011 [2024-12-04 14:17:26.313841] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:25.011 [2024-12-04 14:17:26.313848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:25.011 [2024-12-04 14:17:26.313853] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:25.012 [2024-12-04 14:17:26.313860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:25.012 [2024-12-04 14:17:26.313865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:25.012 [2024-12-04 14:17:26.313872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:25.012 [2024-12-04 14:17:26.313877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:25.012 [2024-12-04 14:17:26.313883] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:25.012 [2024-12-04 14:17:26.313889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:25.012 [2024-12-04 14:17:26.313895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:25.012 [2024-12-04 14:17:26.313900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:25.012 [2024-12-04 14:17:26.313907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:25.012 [2024-12-04 14:17:26.313912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:25.012 [2024-12-04 14:17:26.313921] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:25.012 [2024-12-04 14:17:26.313926] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:25.012 [2024-12-04 14:17:26.313933] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:25.012 [2024-12-04 14:17:26.313939] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:25.012 [2024-12-04 14:17:26.313946] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:25.012 [2024-12-04 14:17:26.313951] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:25.012 [2024-12-04 14:17:26.313957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:25.012 [2024-12-04 14:17:26.313963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.313969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:25.012 [2024-12-04 14:17:26.313975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:16:25.012 [2024-12-04 14:17:26.313982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.325756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.325786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.012 [2024-12-04 14:17:26.325794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.744 ms 00:16:25.012 [2024-12-04 14:17:26.325801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.325868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.325877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:25.012 [2024-12-04 14:17:26.325885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:25.012 [2024-12-04 14:17:26.325891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.349548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.349575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.012 [2024-12-04 14:17:26.349584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.624 ms 00:16:25.012 [2024-12-04 14:17:26.349592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.349614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.349623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.012 [2024-12-04 14:17:26.349629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:25.012 [2024-12-04 14:17:26.349638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.349928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.349942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.012 [2024-12-04 14:17:26.349949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:16:25.012 [2024-12-04 14:17:26.349956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.350041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.350050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.012 [2024-12-04 14:17:26.350055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 
00:16:25.012 [2024-12-04 14:17:26.350062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.361859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.361885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.012 [2024-12-04 14:17:26.361893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.784 ms 00:16:25.012 [2024-12-04 14:17:26.361900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.370944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:25.012 [2024-12-04 14:17:26.373207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.373229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:25.012 [2024-12-04 14:17:26.373238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.253 ms 00:16:25.012 [2024-12-04 14:17:26.373245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.434055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.012 [2024-12-04 14:17:26.434096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:25.012 [2024-12-04 14:17:26.434113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.788 ms 00:16:25.012 [2024-12-04 14:17:26.434120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.012 [2024-12-04 14:17:26.434153] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:25.012 [2024-12-04 14:17:26.434161] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:28.339 [2024-12-04 14:17:29.300536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.339 [2024-12-04 14:17:29.300596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:28.339 [2024-12-04 14:17:29.300614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2866.365 ms 00:16:28.339 [2024-12-04 14:17:29.300623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.339 [2024-12-04 14:17:29.300811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.339 [2024-12-04 14:17:29.300821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:28.339 [2024-12-04 14:17:29.300834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:28.339 [2024-12-04 14:17:29.300842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.339 [2024-12-04 14:17:29.324519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.339 [2024-12-04 14:17:29.324552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:28.339 [2024-12-04 14:17:29.324566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.632 ms 00:16:28.339 [2024-12-04 14:17:29.324574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.339 [2024-12-04 14:17:29.347759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.339 [2024-12-04 14:17:29.347790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:28.339 [2024-12-04 14:17:29.347806] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.149 ms 00:16:28.339 [2024-12-04 14:17:29.347813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.339 [2024-12-04 14:17:29.348133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.339 [2024-12-04 14:17:29.348144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:28.339 [2024-12-04 14:17:29.348155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:28.339 [2024-12-04 14:17:29.348162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.411755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.411884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:28.340 [2024-12-04 14:17:29.411904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.560 ms 00:16:28.340 [2024-12-04 14:17:29.411913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.436709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.436742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:28.340 [2024-12-04 14:17:29.436755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.761 ms 00:16:28.340 [2024-12-04 14:17:29.436763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.437949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.437978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:28.340 [2024-12-04 14:17:29.437991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:16:28.340 [2024-12-04 14:17:29.437998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.462107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.462137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:28.340 [2024-12-04 14:17:29.462149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.070 ms 00:16:28.340 [2024-12-04 14:17:29.462157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.462200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.462208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:28.340 [2024-12-04 14:17:29.462218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:28.340 [2024-12-04 14:17:29.462225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.462301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.340 [2024-12-04 14:17:29.462310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:28.340 [2024-12-04 14:17:29.462319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:28.340 [2024-12-04 14:17:29.462327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.340 [2024-12-04 14:17:29.463178] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3159.131 ms, result 0 00:16:28.340 { 00:16:28.340 "name": 
"ftl0", 00:16:28.340 "uuid": "ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf" 00:16:28.340 } 00:16:28.340 14:17:29 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:16:28.340 14:17:29 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:28.340 14:17:29 -- ftl/restore.sh@63 -- # echo ']}' 00:16:28.340 14:17:29 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:28.616 [2024-12-04 14:17:29.846773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.846823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:28.616 [2024-12-04 14:17:29.846835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:28.616 [2024-12-04 14:17:29.846844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.616 [2024-12-04 14:17:29.846867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:28.616 [2024-12-04 14:17:29.849340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.849465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:28.616 [2024-12-04 14:17:29.849486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:16:28.616 [2024-12-04 14:17:29.849499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.616 [2024-12-04 14:17:29.849763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.849772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:28.616 [2024-12-04 14:17:29.849783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:16:28.616 [2024-12-04 14:17:29.849790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.616 [2024-12-04 14:17:29.853042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.853138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:28.616 [2024-12-04 14:17:29.853154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:16:28.616 [2024-12-04 14:17:29.853161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.616 [2024-12-04 14:17:29.859260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.859285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:28.616 [2024-12-04 14:17:29.859296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.074 ms 00:16:28.616 [2024-12-04 14:17:29.859304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.616 [2024-12-04 14:17:29.883739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.616 [2024-12-04 14:17:29.883770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:28.617 [2024-12-04 14:17:29.883783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.365 ms 00:16:28.617 [2024-12-04 14:17:29.883790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.899546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.899658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:28.617 
[2024-12-04 14:17:29.899678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.719 ms 00:16:28.617 [2024-12-04 14:17:29.899686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.899826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.899837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:28.617 [2024-12-04 14:17:29.899847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:28.617 [2024-12-04 14:17:29.899857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.923950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.923979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:28.617 [2024-12-04 14:17:29.923991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.071 ms 00:16:28.617 [2024-12-04 14:17:29.924000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.947630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.947658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:28.617 [2024-12-04 14:17:29.947670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.595 ms 00:16:28.617 [2024-12-04 14:17:29.947676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.970669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.970774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:28.617 [2024-12-04 14:17:29.970792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.959 ms 00:16:28.617 [2024-12-04 14:17:29.970799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.993814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.617 [2024-12-04 14:17:29.993914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:28.617 [2024-12-04 14:17:29.993931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.950 ms 00:16:28.617 [2024-12-04 14:17:29.993938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.617 [2024-12-04 14:17:29.993969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:28.617 [2024-12-04 14:17:29.993984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.993996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 
/ 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994482] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:28.617 [2024-12-04 14:17:29.994579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 
14:17:29.994693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:28.618 [2024-12-04 14:17:29.994858] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:28.618 [2024-12-04 14:17:29.994867] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf 00:16:28.618 [2024-12-04 14:17:29.994875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:28.618 [2024-12-04 14:17:29.994883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:28.618 [2024-12-04 14:17:29.994889] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:28.618 [2024-12-04 14:17:29.994898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:28.618 [2024-12-04 14:17:29.994905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:28.618 [2024-12-04 
14:17:29.994914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:28.618 [2024-12-04 14:17:29.994921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:28.618 [2024-12-04 14:17:29.994928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:28.618 [2024-12-04 14:17:29.994935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:28.618 [2024-12-04 14:17:29.994945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.618 [2024-12-04 14:17:29.994952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:28.618 [2024-12-04 14:17:29.994963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:16:28.618 [2024-12-04 14:17:29.994970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.007611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.618 [2024-12-04 14:17:30.007636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:28.618 [2024-12-04 14:17:30.007647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.610 ms 00:16:28.618 [2024-12-04 14:17:30.007654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.007849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.618 [2024-12-04 14:17:30.007860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:28.618 [2024-12-04 14:17:30.007869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:28.618 [2024-12-04 14:17:30.007875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.052839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.618 [2024-12-04 14:17:30.052880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:28.618 [2024-12-04 14:17:30.052894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.618 [2024-12-04 14:17:30.052902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.052962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.618 [2024-12-04 14:17:30.052972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:28.618 [2024-12-04 14:17:30.052981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.618 [2024-12-04 14:17:30.052988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.053058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.618 [2024-12-04 14:17:30.053068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:28.618 [2024-12-04 14:17:30.053077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.618 [2024-12-04 14:17:30.053101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.618 [2024-12-04 14:17:30.053121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.618 [2024-12-04 14:17:30.053128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:28.618 [2024-12-04 14:17:30.053139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.618 [2024-12-04 14:17:30.053146] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:28.879 [2024-12-04 14:17:30.127827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.879 [2024-12-04 14:17:30.127872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:28.879 [2024-12-04 14:17:30.127887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.879 [2024-12-04 14:17:30.127894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.879 [2024-12-04 14:17:30.156793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.879 [2024-12-04 14:17:30.156828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:28.879 [2024-12-04 14:17:30.156840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.879 [2024-12-04 14:17:30.156848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.879 [2024-12-04 14:17:30.156906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.879 [2024-12-04 14:17:30.156915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:28.879 [2024-12-04 14:17:30.156924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.879 [2024-12-04 14:17:30.156932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.879 [2024-12-04 14:17:30.156978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.879 [2024-12-04 14:17:30.156987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:28.879 [2024-12-04 14:17:30.156997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.879 [2024-12-04 14:17:30.157005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.879 [2024-12-04 14:17:30.157113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.880 [2024-12-04 14:17:30.157124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:28.880 [2024-12-04 14:17:30.157134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.880 [2024-12-04 14:17:30.157141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.880 [2024-12-04 14:17:30.157174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.880 [2024-12-04 14:17:30.157183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:28.880 [2024-12-04 14:17:30.157192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.880 [2024-12-04 14:17:30.157199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.880 [2024-12-04 14:17:30.157238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.880 [2024-12-04 14:17:30.157246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:28.880 [2024-12-04 14:17:30.157255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.880 [2024-12-04 14:17:30.157263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.880 [2024-12-04 14:17:30.157306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:28.880 [2024-12-04 14:17:30.157315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:28.880 [2024-12-04 14:17:30.157325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:28.880 
[2024-12-04 14:17:30.157334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:28.880 [2024-12-04 14:17:30.157454] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.650 ms, result 0
00:16:28.880 true
00:16:28.880 14:17:30 -- ftl/restore.sh@66 -- # killprocess 72518
00:16:28.880 14:17:30 -- common/autotest_common.sh@936 -- # '[' -z 72518 ']'
00:16:28.880 14:17:30 -- common/autotest_common.sh@940 -- # kill -0 72518
00:16:28.880 14:17:30 -- common/autotest_common.sh@941 -- # uname
00:16:28.880 14:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:28.880 14:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72518
killing process with pid 72518
00:16:28.880 14:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:28.880 14:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:28.880 14:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72518'
00:16:28.880 14:17:30 -- common/autotest_common.sh@955 -- # kill 72518
00:16:28.880 14:17:30 -- common/autotest_common.sh@960 -- # wait 72518
00:16:34.166 14:17:35 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:16:38.369 262144+0 records in
00:16:38.369 262144+0 records out
00:16:38.369 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00679 s, 268 MB/s
00:16:40.273 14:17:39 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:16:40.273 14:17:41 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:40.273 [2024-12-04 14:17:41.292135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
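The restore flow traced above shuts the FTL device down cleanly, kills the SPDK app, then stages test data for the restore: 1 GiB of random bytes from dd, an md5sum of the file for later comparison, and a write-through of the whole file to the ftl0 bdev via spdk_dd. (The shutdown statistics a few records earlier show total writes 960 with user writes 0, hence "WAF: inf" — write amplification is simply undefined before any user data lands.) The dd sizing and rate are easy to check; a minimal standalone sketch in the harness's own shell, with the numbers taken from the log (the snippet is illustrative, not part of restore.sh):

#!/usr/bin/env bash
# bs=4K count=256K => 4096 B * 262144 records = 1073741824 B (1.0 GiB),
# matching the "262144+0 records out" / "1073741824 bytes" lines above.
bytes=$((4096 * 262144))
echo "file size: ${bytes} bytes"
# dd reports decimal megabytes: 1073741824 B / 4.00679 s ~= 268 MB/s, as logged.
awk -v b="$bytes" -v s=4.00679 'BEGIN { printf "throughput: %.0f MB/s\n", b / s / 1e6 }'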
00:16:40.273 [2024-12-04 14:17:41.292379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72778 ] 00:16:40.273 [2024-12-04 14:17:41.442271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.273 [2024-12-04 14:17:41.616444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.534 [2024-12-04 14:17:41.864700] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:40.534 [2024-12-04 14:17:41.864757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:40.795 [2024-12-04 14:17:42.019140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.795 [2024-12-04 14:17:42.019330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:40.795 [2024-12-04 14:17:42.019351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:40.795 [2024-12-04 14:17:42.019363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.019417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.019427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.796 [2024-12-04 14:17:42.019436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:40.796 [2024-12-04 14:17:42.019443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.019462] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:40.796 [2024-12-04 14:17:42.020193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:40.796 [2024-12-04 14:17:42.020209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.020217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.796 [2024-12-04 14:17:42.020226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:16:40.796 [2024-12-04 14:17:42.020233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.021268] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:40.796 [2024-12-04 14:17:42.033468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.033499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:40.796 [2024-12-04 14:17:42.033511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms 00:16:40.796 [2024-12-04 14:17:42.033518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.033567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.033576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:40.796 [2024-12-04 14:17:42.033584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:40.796 [2024-12-04 14:17:42.033591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.038168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 
14:17:42.038194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.796 [2024-12-04 14:17:42.038203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.523 ms 00:16:40.796 [2024-12-04 14:17:42.038209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.038283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.038292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.796 [2024-12-04 14:17:42.038300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:40.796 [2024-12-04 14:17:42.038307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.038349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.038358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:40.796 [2024-12-04 14:17:42.038365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:40.796 [2024-12-04 14:17:42.038372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.038398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:40.796 [2024-12-04 14:17:42.043043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.043102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.796 [2024-12-04 14:17:42.043117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:16:40.796 [2024-12-04 14:17:42.043133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.043175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.043186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:40.796 [2024-12-04 14:17:42.043196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:40.796 [2024-12-04 14:17:42.043209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.043247] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:40.796 [2024-12-04 14:17:42.043272] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:40.796 [2024-12-04 14:17:42.043312] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:40.796 [2024-12-04 14:17:42.043331] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:40.796 [2024-12-04 14:17:42.043412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:40.796 [2024-12-04 14:17:42.043423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:40.796 [2024-12-04 14:17:42.043435] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:40.796 [2024-12-04 14:17:42.043445] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043453] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043461] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:40.796 [2024-12-04 14:17:42.043467] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:40.796 [2024-12-04 14:17:42.043474] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:40.796 [2024-12-04 14:17:42.043481] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:40.796 [2024-12-04 14:17:42.043488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.043496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:40.796 [2024-12-04 14:17:42.043503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:16:40.796 [2024-12-04 14:17:42.043510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.043571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.796 [2024-12-04 14:17:42.043579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:40.796 [2024-12-04 14:17:42.043586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:40.796 [2024-12-04 14:17:42.043593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.796 [2024-12-04 14:17:42.043669] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:40.796 [2024-12-04 14:17:42.043678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:40.796 [2024-12-04 14:17:42.043687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:40.796 [2024-12-04 14:17:42.043707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:40.796 [2024-12-04 14:17:42.043727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.796 [2024-12-04 14:17:42.043740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:40.796 [2024-12-04 14:17:42.043746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:40.796 [2024-12-04 14:17:42.043755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:40.796 [2024-12-04 14:17:42.043761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:40.796 [2024-12-04 14:17:42.043767] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:40.796 [2024-12-04 14:17:42.043774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:40.796 [2024-12-04 14:17:42.043792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:40.796 [2024-12-04 14:17:42.043798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:40.796 [2024-12-04 14:17:42.043812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:40.796 [2024-12-04 14:17:42.043818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:40.796 [2024-12-04 14:17:42.043831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:40.796 [2024-12-04 14:17:42.043850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:40.796 [2024-12-04 14:17:42.043869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:40.796 [2024-12-04 14:17:42.043887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:40.796 [2024-12-04 14:17:42.043905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.796 [2024-12-04 14:17:42.043917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:40.796 [2024-12-04 14:17:42.043924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:40.796 [2024-12-04 14:17:42.043930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:40.796 [2024-12-04 14:17:42.043936] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:40.796 [2024-12-04 14:17:42.043945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:40.796 [2024-12-04 14:17:42.043952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:40.796 [2024-12-04 14:17:42.043959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:40.796 [2024-12-04 14:17:42.043967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:40.797 [2024-12-04 14:17:42.043973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:40.797 [2024-12-04 14:17:42.043980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:40.797 [2024-12-04 14:17:42.043986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:40.797 [2024-12-04 14:17:42.043992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:40.797 [2024-12-04 14:17:42.043999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:40.797 [2024-12-04 14:17:42.044007] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:16:40.797 [2024-12-04 14:17:42.044015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:40.797 [2024-12-04 14:17:42.044023] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:16:40.797 [2024-12-04 14:17:42.044030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:16:40.797 [2024-12-04 14:17:42.044038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:16:40.797 [2024-12-04 14:17:42.044045] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:16:40.797 [2024-12-04 14:17:42.044052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:16:40.797 [2024-12-04 14:17:42.044059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:16:40.797 [2024-12-04 14:17:42.044065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:16:40.797 [2024-12-04 14:17:42.044073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:16:40.797 [2024-12-04 14:17:42.044079] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:16:40.797 [2024-12-04 14:17:42.044099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:16:40.797 [2024-12-04 14:17:42.044106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:16:40.797 [2024-12-04 14:17:42.044113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:16:40.797 [2024-12-04 14:17:42.044120] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:16:40.797 [2024-12-04 14:17:42.044127] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:16:40.797 [2024-12-04 14:17:42.044135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:16:40.797 [2024-12-04 14:17:42.044142] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:16:40.797 [2024-12-04 14:17:42.044149] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:16:40.797 [2024-12-04 14:17:42.044156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:16:40.797 [2024-12-04 14:17:42.044163] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
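The blk_offs/blk_sz values in the superblock dump above are counted in FTL blocks. Assuming the 4 KiB block size these figures are consistent with, the hex sizes tie out against the MiB numbers in the ftl_layout region dump earlier; a quick illustrative shell check (the region-name pairings are inferred by matching offsets and sizes against that dump, not derived here):

# Assumes 4 KiB FTL blocks; compare with the ftl_layout region dump above.
echo $(( 0x5000 * 4 / 1024 ))    # type 0x2 (l2p):      20480 blocks   ->     80 MiB
echo $(( 0x100000 * 4 / 1024 ))  # type 0x8 (data_nvc): 1048576 blocks ->   4096 MiB
echo $(( 0x1900000 * 4 / 1024 )) # type 0x9 (data_btm): 26214400 blocks -> 102400 MiB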
00:16:40.797 [2024-12-04 14:17:42.044170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.044177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:16:40.797 [2024-12-04 14:17:42.044187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms
00:16:40.797 [2024-12-04 14:17:42.044194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.058584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.058616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:40.797 [2024-12-04 14:17:42.058625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.349 ms
00:16:40.797 [2024-12-04 14:17:42.058636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.058717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.058724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:16:40.797 [2024-12-04 14:17:42.058732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:16:40.797 [2024-12-04 14:17:42.058739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.102998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.103039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:40.797 [2024-12-04 14:17:42.103051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.217 ms
00:16:40.797 [2024-12-04 14:17:42.103059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.103118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.103129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:40.797 [2024-12-04 14:17:42.103137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:40.797 [2024-12-04 14:17:42.103145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.103487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.103503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:40.797 [2024-12-04 14:17:42.103512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms
00:16:40.797 [2024-12-04 14:17:42.103522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.103631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.103654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:40.797 [2024-12-04 14:17:42.103662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms
00:16:40.797 [2024-12-04 14:17:42.103670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:40.797 [2024-12-04 14:17:42.117346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:40.797 [2024-12-04 14:17:42.117377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:40.797 [2024-12-04 14:17:42.117387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.656 ms
00:16:40.797 [2024-12-04
14:17:42.117394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.130072] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:40.797 [2024-12-04 14:17:42.130127] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:40.797 [2024-12-04 14:17:42.130138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.130146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:40.797 [2024-12-04 14:17:42.130155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:16:40.797 [2024-12-04 14:17:42.130162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.154540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.154572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:40.797 [2024-12-04 14:17:42.154583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.342 ms 00:16:40.797 [2024-12-04 14:17:42.154591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.166292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.166409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:40.797 [2024-12-04 14:17:42.166424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:16:40.797 [2024-12-04 14:17:42.166431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.178102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.178144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:40.797 [2024-12-04 14:17:42.178160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.643 ms 00:16:40.797 [2024-12-04 14:17:42.178167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.178525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.178540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:40.797 [2024-12-04 14:17:42.178548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:40.797 [2024-12-04 14:17:42.178555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.237982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.238032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:40.797 [2024-12-04 14:17:42.238045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.408 ms 00:16:40.797 [2024-12-04 14:17:42.238053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.248932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:40.797 [2024-12-04 14:17:42.251467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.251498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:40.797 [2024-12-04 14:17:42.251510] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.331 ms 00:16:40.797 [2024-12-04 14:17:42.251517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.251591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.251601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:40.797 [2024-12-04 14:17:42.251610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:40.797 [2024-12-04 14:17:42.251617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.251675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.251684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:40.797 [2024-12-04 14:17:42.251692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:40.797 [2024-12-04 14:17:42.251699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.252893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.252987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:40.797 [2024-12-04 14:17:42.253036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:16:40.797 [2024-12-04 14:17:42.253058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.797 [2024-12-04 14:17:42.253115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.797 [2024-12-04 14:17:42.253139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:40.797 [2024-12-04 14:17:42.253158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:40.798 [2024-12-04 14:17:42.253220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.798 [2024-12-04 14:17:42.253267] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:40.798 [2024-12-04 14:17:42.253290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.798 [2024-12-04 14:17:42.253308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:40.798 [2024-12-04 14:17:42.253330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:40.798 [2024-12-04 14:17:42.253373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.059 [2024-12-04 14:17:42.276731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.059 [2024-12-04 14:17:42.276841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:41.059 [2024-12-04 14:17:42.276889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:16:41.059 [2024-12-04 14:17:42.276910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.059 [2024-12-04 14:17:42.276980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.059 [2024-12-04 14:17:42.277009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:41.059 [2024-12-04 14:17:42.277028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:41.059 [2024-12-04 14:17:42.277036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.059 [2024-12-04 14:17:42.277909] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 258.360 ms, result 0
00:16:42.001  [2024-12-04T14:17:44.400Z] Copying: 17/1024 [MB] (17 MBps)
[2024-12-04T14:17:45.341Z] Copying: 55/1024 [MB] (38 MBps)
[2024-12-04T14:17:46.720Z] Copying: 87/1024 [MB] (31 MBps)
[2024-12-04T14:17:47.318Z] Copying: 108/1024 [MB] (20 MBps)
[2024-12-04T14:17:48.704Z] Copying: 130/1024 [MB] (22 MBps)
[2024-12-04T14:17:49.645Z] Copying: 153/1024 [MB] (22 MBps)
[2024-12-04T14:17:50.584Z] Copying: 175/1024 [MB] (22 MBps)
[2024-12-04T14:17:51.527Z] Copying: 198/1024 [MB] (22 MBps)
[2024-12-04T14:17:52.472Z] Copying: 219/1024 [MB] (21 MBps)
[2024-12-04T14:17:53.415Z] Copying: 239/1024 [MB] (19 MBps)
[2024-12-04T14:17:54.357Z] Copying: 261/1024 [MB] (22 MBps)
[2024-12-04T14:17:55.312Z] Copying: 277/1024 [MB] (16 MBps)
[2024-12-04T14:17:56.690Z] Copying: 304/1024 [MB] (26 MBps)
[2024-12-04T14:17:57.634Z] Copying: 358/1024 [MB] (53 MBps)
[2024-12-04T14:17:58.578Z] Copying: 386/1024 [MB] (28 MBps)
[2024-12-04T14:17:59.541Z] Copying: 404/1024 [MB] (17 MBps)
[2024-12-04T14:18:00.487Z] Copying: 424/1024 [MB] (20 MBps)
[2024-12-04T14:18:01.439Z] Copying: 436/1024 [MB] (11 MBps)
[2024-12-04T14:18:02.374Z] Copying: 447/1024 [MB] (11 MBps)
[2024-12-04T14:18:03.335Z] Copying: 484/1024 [MB] (37 MBps)
[2024-12-04T14:18:04.723Z] Copying: 506/1024 [MB] (21 MBps)
[2024-12-04T14:18:05.294Z] Copying: 517/1024 [MB] (11 MBps)
[2024-12-04T14:18:06.681Z] Copying: 528/1024 [MB] (11 MBps)
[2024-12-04T14:18:07.627Z] Copying: 541/1024 [MB] (12 MBps)
[2024-12-04T14:18:08.572Z] Copying: 556/1024 [MB] (15 MBps)
[2024-12-04T14:18:09.517Z] Copying: 584/1024 [MB] (27 MBps)
[2024-12-04T14:18:10.461Z] Copying: 598/1024 [MB] (14 MBps)
[2024-12-04T14:18:11.401Z] Copying: 613/1024 [MB] (14 MBps)
[2024-12-04T14:18:12.344Z] Copying: 654/1024 [MB] (41 MBps)
[2024-12-04T14:18:13.732Z] Copying: 677/1024 [MB] (23 MBps)
[2024-12-04T14:18:14.306Z] Copying: 701/1024 [MB] (23 MBps)
[2024-12-04T14:18:15.687Z] Copying: 719/1024 [MB] (17 MBps)
[2024-12-04T14:18:16.625Z] Copying: 740/1024 [MB] (21 MBps)
[2024-12-04T14:18:17.616Z] Copying: 764/1024 [MB] (24 MBps)
[2024-12-04T14:18:18.560Z] Copying: 790/1024 [MB] (25 MBps)
[2024-12-04T14:18:19.498Z] Copying: 811/1024 [MB] (21 MBps)
[2024-12-04T14:18:20.439Z] Copying: 843/1024 [MB] (31 MBps)
[2024-12-04T14:18:21.384Z] Copying: 881/1024 [MB] (38 MBps)
[2024-12-04T14:18:22.328Z] Copying: 899/1024 [MB] (17 MBps)
[2024-12-04T14:18:23.716Z] Copying: 921/1024 [MB] (21 MBps)
[2024-12-04T14:18:24.661Z] Copying: 941/1024 [MB] (20 MBps)
[2024-12-04T14:18:25.604Z] Copying: 960/1024 [MB] (18 MBps)
[2024-12-04T14:18:26.548Z] Copying: 979/1024 [MB] (19 MBps)
[2024-12-04T14:18:27.492Z] Copying: 999/1024 [MB] (19 MBps)
[2024-12-04T14:18:27.492Z] Copying: 1020/1024 [MB] (20 MBps)
[2024-12-04T14:18:27.492Z] Copying: 1024/1024 [MB] (average 22 MBps)
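End to end, the averaged rate is easy to sanity-check: the spdk_dd copy starts right after 14:17:41 and the last progress stamp above is 14:18:27, so 1024 MB moved in roughly 46 s — about 22 MB/s, matching the reported "average 22 MBps". As a one-line illustrative check in shell:

# 1024 MB over ~46 s of spdk_dd wall time => ~22 MBps, as reported above.
echo $(( 1024 / 46 ))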
[2024-12-04 14:18:27.404006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.404047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:26.027 [2024-12-04 14:18:27.404060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:26.027 [2024-12-04 14:18:27.404068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.404105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:26.027 [2024-12-04 14:18:27.406743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.406771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:26.027 [2024-12-04 14:18:27.406785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.623 ms
00:17:26.027 [2024-12-04 14:18:27.406793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.409125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.409244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:26.027 [2024-12-04 14:18:27.409261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms
00:17:26.027 [2024-12-04 14:18:27.409268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.424551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.424686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:26.027 [2024-12-04 14:18:27.424704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.265 ms
00:17:26.027 [2024-12-04 14:18:27.424718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.430799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.430825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:17:26.027 [2024-12-04 14:18:27.430836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms
00:17:26.027 [2024-12-04 14:18:27.430845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.455318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.455359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:26.027 [2024-12-04 14:18:27.455370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.414 ms
00:17:26.027 [2024-12-04 14:18:27.455376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.469487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.469516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:26.027 [2024-12-04 14:18:27.469527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.080 ms
00:17:26.027 [2024-12-04 14:18:27.469534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.027 [2024-12-04 14:18:27.469669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.027 [2024-12-04 14:18:27.469679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:26.027 [2024-12-04 14:18:27.469687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:17:26.027 [2024-12-04 14:18:27.469694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.290 [2024-12-04 14:18:27.493745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.290 [2024-12-04 14:18:27.493866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:17:26.291 [2024-12-04 14:18:27.493882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.037 ms
00:17:26.291 [2024-12-04 14:18:27.493889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.291 [2024-12-04 14:18:27.517532]
00:17:26.291 [2024-12-04 14:18:27.517532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.291 [2024-12-04 14:18:27.517561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:17:26.291 [2024-12-04 14:18:27.517571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.616 ms
00:17:26.291 [2024-12-04 14:18:27.517586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.291 [2024-12-04 14:18:27.540727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.291 [2024-12-04 14:18:27.540839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:26.291 [2024-12-04 14:18:27.540854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.109 ms
00:17:26.291 [2024-12-04 14:18:27.540860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.291 [2024-12-04 14:18:27.564152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.291 [2024-12-04 14:18:27.564184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:26.291 [2024-12-04 14:18:27.564195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.233 ms
00:17:26.291 [2024-12-04 14:18:27.564201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.291 [2024-12-04 14:18:27.564231] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:26.291 [2024-12-04 14:18:27.564245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
(Bands 2 through 99 report the same: 0 / 261120 wr_cnt: 0 state: free)
00:17:26.292 [2024-12-04 14:18:27.564969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:26.292 [2024-12-04 14:18:27.564984] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:26.292 [2024-12-04 14:18:27.564991] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf
00:17:26.292 [2024-12-04 14:18:27.564999] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:26.292 [2024-12-04 14:18:27.565005] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:26.292 [2024-12-04 14:18:27.565012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:26.292 [2024-12-04 14:18:27.565020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:26.292 [2024-12-04 14:18:27.565026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:26.292 [2024-12-04 14:18:27.565034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:26.292 [2024-12-04 14:18:27.565045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:26.292 [2024-12-04 14:18:27.565051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:26.292 [2024-12-04 14:18:27.565062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:26.292 [2024-12-04 14:18:27.565069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.292 [2024-12-04 14:18:27.565077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:26.292 [2024-12-04 14:18:27.565101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms
00:17:26.292 [2024-12-04 14:18:27.565110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:26.292 [2024-12-04 14:18:27.577604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:26.292 [2024-12-04 14:18:27.577630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:26.292 [2024-12-04 14:18:27.577641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]
duration: 12.468 ms 00:17:26.292 [2024-12-04 14:18:27.577649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.577844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.292 [2024-12-04 14:18:27.577852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:26.292 [2024-12-04 14:18:27.577864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:17:26.292 [2024-12-04 14:18:27.577871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.612838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.612869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.292 [2024-12-04 14:18:27.612878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.612885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.612935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.612943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.292 [2024-12-04 14:18:27.612954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.612961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.613015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.613024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.292 [2024-12-04 14:18:27.613031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.613038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.613051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.613058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.292 [2024-12-04 14:18:27.613065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.613074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.686757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.686790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.292 [2024-12-04 14:18:27.686799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.686806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.292 [2024-12-04 14:18:27.716430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.292 
[2024-12-04 14:18:27.716510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.292 [2024-12-04 14:18:27.716570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.292 [2024-12-04 14:18:27.716682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.292 [2024-12-04 14:18:27.716730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.292 [2024-12-04 14:18:27.716791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.292 [2024-12-04 14:18:27.716847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.292 [2024-12-04 14:18:27.716855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.292 [2024-12-04 14:18:27.716862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.292 [2024-12-04 14:18:27.716970] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 312.950 ms, result 0 00:17:27.679 00:17:27.679 00:17:27.679 14:18:28 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:27.679 [2024-12-04 14:18:28.834427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
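The spdk_dd invocation above reads --count=262144 blocks from ftl0 into the test file. A quick consistency check, assuming the ftl0 bdev exposes 4 KiB logical blocks (the block size is inferred from the 1024 MB copy target in the progress lines, not stated in the log):

  # Transfer size implied by --count=262144 at an assumed 4096-byte block size.
  blocks = 262144
  block_size = 4096                              # bytes, assumed
  print(blocks * block_size // (1024 * 1024))    # -> 1024, matching "Copying: .../1024 [MB]"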
00:17:27.679 [2024-12-04 14:18:28.834536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73278 ] 00:17:27.679 [2024-12-04 14:18:28.982584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.941 [2024-12-04 14:18:29.157550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.204 [2024-12-04 14:18:29.407722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.204 [2024-12-04 14:18:29.407934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.204 [2024-12-04 14:18:29.558041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 14:18:29.558101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.204 [2024-12-04 14:18:29.558114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.204 [2024-12-04 14:18:29.558132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.204 [2024-12-04 14:18:29.558178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 14:18:29.558188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.204 [2024-12-04 14:18:29.558197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:28.204 [2024-12-04 14:18:29.558204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.204 [2024-12-04 14:18:29.558222] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.204 [2024-12-04 14:18:29.558929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.204 [2024-12-04 14:18:29.558954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 14:18:29.558962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.204 [2024-12-04 14:18:29.558970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:17:28.204 [2024-12-04 14:18:29.558977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.204 [2024-12-04 14:18:29.559998] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:28.204 [2024-12-04 14:18:29.572743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 14:18:29.572776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:28.204 [2024-12-04 14:18:29.572787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:17:28.204 [2024-12-04 14:18:29.572794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.204 [2024-12-04 14:18:29.572844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 14:18:29.572853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:28.204 [2024-12-04 14:18:29.572860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:28.204 [2024-12-04 14:18:29.572867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.204 [2024-12-04 14:18:29.577705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.204 [2024-12-04 
14:18:29.577733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.204 [2024-12-04 14:18:29.577742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:17:28.204 [2024-12-04 14:18:29.577749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.577823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.577831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.205 [2024-12-04 14:18:29.577838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:28.205 [2024-12-04 14:18:29.577845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.577887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.577896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.205 [2024-12-04 14:18:29.577904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:28.205 [2024-12-04 14:18:29.577911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.577937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.205 [2024-12-04 14:18:29.581429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.581454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.205 [2024-12-04 14:18:29.581463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:17:28.205 [2024-12-04 14:18:29.581469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.581498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.581506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.205 [2024-12-04 14:18:29.581513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:28.205 [2024-12-04 14:18:29.581522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.581540] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:28.205 [2024-12-04 14:18:29.581557] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:28.205 [2024-12-04 14:18:29.581587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:28.205 [2024-12-04 14:18:29.581602] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:28.205 [2024-12-04 14:18:29.581672] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:28.205 [2024-12-04 14:18:29.581682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.205 [2024-12-04 14:18:29.581693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:28.205 [2024-12-04 14:18:29.581703] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.205 [2024-12-04 14:18:29.581712] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.205 [2024-12-04 14:18:29.581720] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:28.205 [2024-12-04 14:18:29.581727] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.205 [2024-12-04 14:18:29.581734] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:28.205 [2024-12-04 14:18:29.581741] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:28.205 [2024-12-04 14:18:29.581749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.581756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.205 [2024-12-04 14:18:29.581763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:17:28.205 [2024-12-04 14:18:29.581770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.581830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.205 [2024-12-04 14:18:29.581838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.205 [2024-12-04 14:18:29.581845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:28.205 [2024-12-04 14:18:29.581851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.205 [2024-12-04 14:18:29.581929] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.205 [2024-12-04 14:18:29.581939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.205 [2024-12-04 14:18:29.581946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.205 [2024-12-04 14:18:29.581953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.205 [2024-12-04 14:18:29.581961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.205 [2024-12-04 14:18:29.581968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.205 [2024-12-04 14:18:29.581975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:28.205 [2024-12-04 14:18:29.581982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.205 [2024-12-04 14:18:29.581989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:28.205 [2024-12-04 14:18:29.581995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.205 [2024-12-04 14:18:29.582003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.205 [2024-12-04 14:18:29.582010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:28.205 [2024-12-04 14:18:29.582016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.205 [2024-12-04 14:18:29.582023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.205 [2024-12-04 14:18:29.582029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:28.205 [2024-12-04 14:18:29.582035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.205 [2024-12-04 14:18:29.582053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:28.205 [2024-12-04 14:18:29.582060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:28.205 [2024-12-04 14:18:29.582072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:28.205 [2024-12-04 14:18:29.582079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.205 [2024-12-04 14:18:29.582109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582129] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.205 [2024-12-04 14:18:29.582136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.205 [2024-12-04 14:18:29.582154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.205 [2024-12-04 14:18:29.582173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.205 [2024-12-04 14:18:29.582192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.205 [2024-12-04 14:18:29.582204] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.205 [2024-12-04 14:18:29.582211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:28.205 [2024-12-04 14:18:29.582217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.205 [2024-12-04 14:18:29.582223] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.205 [2024-12-04 14:18:29.582232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.205 [2024-12-04 14:18:29.582240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.205 [2024-12-04 14:18:29.582255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.205 [2024-12-04 14:18:29.582262] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.205 [2024-12-04 14:18:29.582268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.205 [2024-12-04 14:18:29.582275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.205 [2024-12-04 14:18:29.582281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.205 [2024-12-04 14:18:29.582287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.205 [2024-12-04 14:18:29.582294] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.205 [2024-12-04 14:18:29.582303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.205 [2024-12-04 14:18:29.582311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:28.205 [2024-12-04 14:18:29.582318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:28.205 [2024-12-04 14:18:29.582325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:28.205 [2024-12-04 14:18:29.582332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:28.205 [2024-12-04 14:18:29.582338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:28.205 [2024-12-04 14:18:29.582345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:28.205 [2024-12-04 14:18:29.582352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:28.205 [2024-12-04 14:18:29.582359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:28.205 [2024-12-04 14:18:29.582366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:28.205 [2024-12-04 14:18:29.582373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:28.205 [2024-12-04 14:18:29.582380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:28.206 [2024-12-04 14:18:29.582386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:28.206 [2024-12-04 14:18:29.582394] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:28.206 [2024-12-04 14:18:29.582400] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.206 [2024-12-04 14:18:29.582408] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.206 [2024-12-04 14:18:29.582415] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.206 [2024-12-04 14:18:29.582423] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.206 [2024-12-04 14:18:29.582429] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.206 [2024-12-04 14:18:29.582438] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:17:28.206 [2024-12-04 14:18:29.582445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.582451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.206 [2024-12-04 14:18:29.582458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:17:28.206 [2024-12-04 14:18:29.582467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.597197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.597227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.206 [2024-12-04 14:18:29.597237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.690 ms 00:17:28.206 [2024-12-04 14:18:29.597248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.597329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.597338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.206 [2024-12-04 14:18:29.597345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:28.206 [2024-12-04 14:18:29.597353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.637331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.637466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.206 [2024-12-04 14:18:29.637484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.935 ms 00:17:28.206 [2024-12-04 14:18:29.637493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.637529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.637538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.206 [2024-12-04 14:18:29.637546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.206 [2024-12-04 14:18:29.637553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.637890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.637905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.206 [2024-12-04 14:18:29.637914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:28.206 [2024-12-04 14:18:29.637925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.638031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.638040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.206 [2024-12-04 14:18:29.638048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:28.206 [2024-12-04 14:18:29.638055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.651648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.651677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.206 [2024-12-04 14:18:29.651687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.575 ms 00:17:28.206 [2024-12-04 
14:18:29.651694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.206 [2024-12-04 14:18:29.664703] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:28.206 [2024-12-04 14:18:29.664820] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:28.206 [2024-12-04 14:18:29.664835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.206 [2024-12-04 14:18:29.664842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:28.206 [2024-12-04 14:18:29.664850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:17:28.206 [2024-12-04 14:18:29.664857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.689377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.689409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.467 [2024-12-04 14:18:29.689419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.478 ms 00:17:28.467 [2024-12-04 14:18:29.689427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.701496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.701524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.467 [2024-12-04 14:18:29.701533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.032 ms 00:17:28.467 [2024-12-04 14:18:29.701540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.713368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.713402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.467 [2024-12-04 14:18:29.713412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.797 ms 00:17:28.467 [2024-12-04 14:18:29.713419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.713764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.713775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.467 [2024-12-04 14:18:29.713784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:28.467 [2024-12-04 14:18:29.713791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.771618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.771766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.467 [2024-12-04 14:18:29.771784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.812 ms 00:17:28.467 [2024-12-04 14:18:29.771792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.467 [2024-12-04 14:18:29.782407] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:28.467 [2024-12-04 14:18:29.784612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.467 [2024-12-04 14:18:29.784640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.467 [2024-12-04 14:18:29.784651] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.787 ms 00:17:28.467 [2024-12-04 14:18:29.784661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.784716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.784726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.468 [2024-12-04 14:18:29.784734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.468 [2024-12-04 14:18:29.784741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.784797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.784807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.468 [2024-12-04 14:18:29.784814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:28.468 [2024-12-04 14:18:29.784821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.785952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.786111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:28.468 [2024-12-04 14:18:29.786137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:17:28.468 [2024-12-04 14:18:29.786144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.786172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.786180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.468 [2024-12-04 14:18:29.786194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.468 [2024-12-04 14:18:29.786201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.786229] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.468 [2024-12-04 14:18:29.786238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.786248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.468 [2024-12-04 14:18:29.786255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.468 [2024-12-04 14:18:29.786261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.809935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.810043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.468 [2024-12-04 14:18:29.810058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.657 ms 00:17:28.468 [2024-12-04 14:18:29.810066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.810149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.468 [2024-12-04 14:18:29.810159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.468 [2024-12-04 14:18:29.810167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:28.468 [2024-12-04 14:18:29.810174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.468 [2024-12-04 14:18:29.811101] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 252.641 ms, result 0
00:17:29.873  [2024-12-04T14:18:32.290Z] Copying: 20/1024 [MB] (20 MBps) ... [2024-12-04T14:19:35.828Z] Copying: 1024/1024 [MB] (average 15 MBps)
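A rough cross-check of the reported 15 MBps average, using the first and last progress timestamps above; the first sample already had 20 MB copied and the run started slightly earlier, so this slightly overstates the true rate:

  from datetime import datetime, timezone

  # Wall-clock span of the progress samples above.
  t0 = datetime(2024, 12, 4, 14, 18, 32, 290000, tzinfo=timezone.utc)
  t1 = datetime(2024, 12, 4, 14, 19, 35, 828000, tzinfo=timezone.utc)
  elapsed = (t1 - t0).total_seconds()       # ~63.5 s
  print(f"{1024 / elapsed:.1f} MBps")       # ~16 MBps vs. the logged 15 MBps average

The same arithmetic on the first pass above (14:17:44Z to 14:18:27Z for 1024 MB) gives roughly 24 MBps against its logged 22 MBps average.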
[2024-12-04 14:19:35.779210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.363 [2024-12-04 14:19:35.779296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:34.363 [2024-12-04 14:19:35.779318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:34.363 [2024-12-04 14:19:35.779332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.363 [2024-12-04 14:19:35.779368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:34.363 [2024-12-04 14:19:35.783984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.363 [2024-12-04 14:19:35.784040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:34.363 [2024-12-04 14:19:35.784056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms
00:18:34.363 [2024-12-04 14:19:35.784069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.363 [2024-12-04 14:19:35.784515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.364 [2024-12-04 14:19:35.784552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:34.364 [2024-12-04 14:19:35.784567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms
00:18:34.364 [2024-12-04 14:19:35.784580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.364 [2024-12-04 14:19:35.792633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.364 [2024-12-04 14:19:35.792779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:34.364 [2024-12-04 14:19:35.792802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.029 ms
00:18:34.364 [2024-12-04 14:19:35.792809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.364 [2024-12-04 14:19:35.799904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.364 [2024-12-04 14:19:35.799935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:18:34.364 [2024-12-04 14:19:35.799944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.071 ms
00:18:34.364 [2024-12-04 14:19:35.799951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.364 [2024-12-04 14:19:35.824435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.364 [2024-12-04 14:19:35.824467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:34.364 [2024-12-04 14:19:35.824478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.413 ms
00:18:34.364 [2024-12-04 14:19:35.824485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.627 [2024-12-04 14:19:35.838745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:34.627 [2024-12-04 14:19:35.838777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
valid map metadata 00:18:34.627 [2024-12-04 14:19:35.838788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.227 ms 00:18:34.627 [2024-12-04 14:19:35.838800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.838934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.627 [2024-12-04 14:19:35.838945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.627 [2024-12-04 14:19:35.838953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:34.627 [2024-12-04 14:19:35.838960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.863062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.627 [2024-12-04 14:19:35.863105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:34.627 [2024-12-04 14:19:35.863129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.088 ms 00:18:34.627 [2024-12-04 14:19:35.863141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.886867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.627 [2024-12-04 14:19:35.886990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:34.627 [2024-12-04 14:19:35.887013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.689 ms 00:18:34.627 [2024-12-04 14:19:35.887020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.910251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.627 [2024-12-04 14:19:35.910362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.627 [2024-12-04 14:19:35.910415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:18:34.627 [2024-12-04 14:19:35.910436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.933583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.627 [2024-12-04 14:19:35.933686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.627 [2024-12-04 14:19:35.933734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.032 ms 00:18:34.627 [2024-12-04 14:19:35.933754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.627 [2024-12-04 14:19:35.933791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.627 [2024-12-04 14:19:35.933822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.933854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.933882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.933910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.933966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934078] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.934937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935701] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.935992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 
14:19:35.936560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:34.627 [2024-12-04 14:19:35.936776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.936804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.936933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.936963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.936990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
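The band dump running through this point repeats one fixed record per band: "Band N: <valid> / <size> wr_cnt: <writes> state: <state>". Below is a minimal offline summarizer for such a dump, assuming exactly this record shape; the script and its names are illustrative only and are not part of SPDK or of this test run.

#!/usr/bin/env python3
# summarize_bands.py -- hypothetical helper; the record format is assumed
# from the "Band N: <valid> / <size> wr_cnt: <n> state: <state>" lines here.
import re
import sys
from collections import Counter

BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize(text):
    """Count bands per state and total the valid-block column."""
    states = Counter()
    valid_total = 0
    for _num, valid, _size, _wr_cnt, state in BAND_RE.findall(text):
        states[state] += 1
        valid_total += int(valid)
    return states, valid_total

if __name__ == "__main__":
    states, valid_total = summarize(sys.stdin.read())
    for state, count in states.most_common():
        print(f"{state}: {count} bands")
    print(f"total valid blocks: {valid_total}")

Applied to the full dump around this point it would report 100 free bands and 0 valid blocks, consistent with the "total valid LBAs: 0" stat dumped just after the bands.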
00:18:34.628 [2024-12-04 14:19:35.937506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:34.628 [2024-12-04 14:19:35.937785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.628 [2024-12-04 14:19:35.937793] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf 00:18:34.628 [2024-12-04 14:19:35.937801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.628 [2024-12-04 14:19:35.937808] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.628 [2024-12-04 14:19:35.937814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.628 [2024-12-04 14:19:35.937822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.628 [2024-12-04 14:19:35.937828] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:18:34.628 [2024-12-04 14:19:35.937836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.628 [2024-12-04 14:19:35.937843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.628 [2024-12-04 14:19:35.937858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.628 [2024-12-04 14:19:35.937864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.628 [2024-12-04 14:19:35.937873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.628 [2024-12-04 14:19:35.937881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.628 [2024-12-04 14:19:35.937892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms 00:18:34.628 [2024-12-04 14:19:35.937899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.951388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.628 [2024-12-04 14:19:35.951499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.628 [2024-12-04 14:19:35.951515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.446 ms 00:18:34.628 [2024-12-04 14:19:35.951523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.951736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.628 [2024-12-04 14:19:35.951751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.628 [2024-12-04 14:19:35.951759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:18:34.628 [2024-12-04 14:19:35.951766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.987361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:35.987394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.628 [2024-12-04 14:19:35.987404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:35.987411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.987463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:35.987475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.628 [2024-12-04 14:19:35.987482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:35.987489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.987555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:35.987564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.628 [2024-12-04 14:19:35.987572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:35.987578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:35.987593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:35.987600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.628 [2024-12-04 14:19:35.987611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:35.987617] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.059810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.059846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.628 [2024-12-04 14:19:36.059858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.059865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.088941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.088974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.628 [2024-12-04 14:19:36.088989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.088996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.089050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.089059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.628 [2024-12-04 14:19:36.089067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.089074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.089137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.089151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.628 [2024-12-04 14:19:36.089163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.089173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.089262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.089271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.628 [2024-12-04 14:19:36.089278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.089285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.089311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.089320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.628 [2024-12-04 14:19:36.089327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.628 [2024-12-04 14:19:36.089334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.628 [2024-12-04 14:19:36.089371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.628 [2024-12-04 14:19:36.089378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.628 [2024-12-04 14:19:36.089386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.629 [2024-12-04 14:19:36.089393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.629 [2024-12-04 14:19:36.089434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.629 [2024-12-04 14:19:36.089442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.629 [2024-12-04 14:19:36.089450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:34.629 [2024-12-04 14:19:36.089460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.629 [2024-12-04 14:19:36.089566] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 310.349 ms, result 0 00:18:35.573 00:18:35.573 00:18:35.573 14:19:36 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:38.120 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:38.120 14:19:39 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:38.120 [2024-12-04 14:19:39.117701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:38.120 [2024-12-04 14:19:39.117809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74011 ] 00:18:38.120 [2024-12-04 14:19:39.264316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.120 [2024-12-04 14:19:39.443466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.382 [2024-12-04 14:19:39.694351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.382 [2024-12-04 14:19:39.694411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.382 [2024-12-04 14:19:39.844548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.382 [2024-12-04 14:19:39.844593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:38.382 [2024-12-04 14:19:39.844605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.382 [2024-12-04 14:19:39.844615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.382 [2024-12-04 14:19:39.844661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.382 [2024-12-04 14:19:39.844671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.382 [2024-12-04 14:19:39.844678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:38.382 [2024-12-04 14:19:39.844686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.382 [2024-12-04 14:19:39.844701] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:38.382 [2024-12-04 14:19:39.845456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:38.382 [2024-12-04 14:19:39.845478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.382 [2024-12-04 14:19:39.845486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.382 [2024-12-04 14:19:39.845494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:18:38.382 [2024-12-04 14:19:39.845501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.382 [2024-12-04 14:19:39.846553] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:38.645 [2024-12-04 14:19:39.859247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.859278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Load super block 00:18:38.646 [2024-12-04 14:19:39.859289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.697 ms 00:18:38.646 [2024-12-04 14:19:39.859296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.859346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.859354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:38.646 [2024-12-04 14:19:39.859362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:38.646 [2024-12-04 14:19:39.859369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.864261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.864399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.646 [2024-12-04 14:19:39.864414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.837 ms 00:18:38.646 [2024-12-04 14:19:39.864422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.864499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.864509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.646 [2024-12-04 14:19:39.864517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:38.646 [2024-12-04 14:19:39.864524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.864569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.864578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:38.646 [2024-12-04 14:19:39.864586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:38.646 [2024-12-04 14:19:39.864593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.864619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:38.646 [2024-12-04 14:19:39.868104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.868131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.646 [2024-12-04 14:19:39.868139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.496 ms 00:18:38.646 [2024-12-04 14:19:39.868146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.868176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.868184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:38.646 [2024-12-04 14:19:39.868192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:38.646 [2024-12-04 14:19:39.868201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.868219] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:38.646 [2024-12-04 14:19:39.868237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:38.646 [2024-12-04 14:19:39.868268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
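Every management step in this startup sequence is traced as a group of records: Action, then "name: <step>", then "duration: <ms> ms", then "status: <code>". Below is a minimal sketch that pairs each name record with the duration record that follows it and totals time per step; the pairing rule and record shapes are assumed only from the output visible in this log, and the script itself is illustrative, not an SPDK tool.

#!/usr/bin/env python3
# ftl_step_times.py -- hypothetical log reducer; reads a console log on
# stdin and totals the per-step durations traced by mngt/ftl_mngt.c.
import re
import sys
from collections import defaultdict

# A step's "name:" record is followed (possibly across wrapped timestamps)
# by its "duration:" record for the same device, e.g.
#   ... [FTL][ftl0] name: Load super block ...
#   ... [FTL][ftl0] duration: 12.697 ms
PAIR_RE = re.compile(
    r"\[FTL\]\[(\w+)\] name: (.+?)\s\d{2}:\d{2}:\d{2}"  # name, ended by the next elapsed stamp
    r".*?\[FTL\]\[\1\] duration: ([\d.]+) ms",          # the matching duration record
    re.DOTALL,
)

def step_totals(text):
    """Sum the reported milliseconds per (device, step-name) pair."""
    totals = defaultdict(float)
    for dev, name, ms in PAIR_RE.findall(text):
        totals[(dev, name)] += float(ms)
    return totals

if __name__ == "__main__":
    totals = step_totals(sys.stdin.read())
    for (dev, name), ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{dev} {name}: {ms:.3f} ms")

Fed this run's console text, the slow steps stand out immediately, e.g. "Restore P2L checkpoints" at 58.414 ms versus the sub-millisecond bookkeeping steps.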
00:18:38.646 [2024-12-04 14:19:39.868283] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:38.646 [2024-12-04 14:19:39.868354] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:38.646 [2024-12-04 14:19:39.868364] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:38.646 [2024-12-04 14:19:39.868376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:38.646 [2024-12-04 14:19:39.868385] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868394] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868401] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:38.646 [2024-12-04 14:19:39.868409] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:38.646 [2024-12-04 14:19:39.868416] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:38.646 [2024-12-04 14:19:39.868423] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:38.646 [2024-12-04 14:19:39.868430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.868437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:38.646 [2024-12-04 14:19:39.868444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:18:38.646 [2024-12-04 14:19:39.868451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.868510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.646 [2024-12-04 14:19:39.868518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:38.646 [2024-12-04 14:19:39.868525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:38.646 [2024-12-04 14:19:39.868532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.646 [2024-12-04 14:19:39.868610] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:38.646 [2024-12-04 14:19:39.868619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:38.646 [2024-12-04 14:19:39.868626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:38.646 [2024-12-04 14:19:39.868648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:38.646 [2024-12-04 14:19:39.868668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.646 [2024-12-04 14:19:39.868681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:38.646 [2024-12-04 14:19:39.868688] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:38.646 [2024-12-04 14:19:39.868694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.646 [2024-12-04 14:19:39.868700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:38.646 [2024-12-04 14:19:39.868707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:38.646 [2024-12-04 14:19:39.868714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868726] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:38.646 [2024-12-04 14:19:39.868733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:38.646 [2024-12-04 14:19:39.868739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:38.646 [2024-12-04 14:19:39.868751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:38.646 [2024-12-04 14:19:39.868758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:38.646 [2024-12-04 14:19:39.868771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:38.646 [2024-12-04 14:19:39.868789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:38.646 [2024-12-04 14:19:39.868807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:38.646 [2024-12-04 14:19:39.868826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:38.646 [2024-12-04 14:19:39.868845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.646 [2024-12-04 14:19:39.868857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:38.646 [2024-12-04 14:19:39.868863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:38.646 [2024-12-04 14:19:39.868869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.646 [2024-12-04 14:19:39.868875] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:38.646 [2024-12-04 14:19:39.868885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:38.646 [2024-12-04 14:19:39.868892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:18:38.646 [2024-12-04 14:19:39.868901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.646 [2024-12-04 14:19:39.868909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:38.646 [2024-12-04 14:19:39.868915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:38.646 [2024-12-04 14:19:39.868921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:38.646 [2024-12-04 14:19:39.868928] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:38.646 [2024-12-04 14:19:39.868934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:38.646 [2024-12-04 14:19:39.868941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:38.646 [2024-12-04 14:19:39.868948] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:38.646 [2024-12-04 14:19:39.868956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.646 [2024-12-04 14:19:39.868964] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:38.647 [2024-12-04 14:19:39.868972] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:38.647 [2024-12-04 14:19:39.868979] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:38.647 [2024-12-04 14:19:39.868985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:38.647 [2024-12-04 14:19:39.868992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:38.647 [2024-12-04 14:19:39.868998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:38.647 [2024-12-04 14:19:39.869005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:38.647 [2024-12-04 14:19:39.869012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:38.647 [2024-12-04 14:19:39.869018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:38.647 [2024-12-04 14:19:39.869025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:38.647 [2024-12-04 14:19:39.869032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:38.647 [2024-12-04 14:19:39.869039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:38.647 [2024-12-04 14:19:39.869047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:38.647 [2024-12-04 14:19:39.869053] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:38.647 [2024-12-04 
14:19:39.869060] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.647 [2024-12-04 14:19:39.869068] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:38.647 [2024-12-04 14:19:39.869075] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:38.647 [2024-12-04 14:19:39.869082] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:38.647 [2024-12-04 14:19:39.869102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:38.647 [2024-12-04 14:19:39.869109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.869117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:38.647 [2024-12-04 14:19:39.869124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:18:38.647 [2024-12-04 14:19:39.869131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.883829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.883861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.647 [2024-12-04 14:19:39.883871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:18:38.647 [2024-12-04 14:19:39.883882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.883963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.883971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:38.647 [2024-12-04 14:19:39.883978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:38.647 [2024-12-04 14:19:39.883985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.924305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.924439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.647 [2024-12-04 14:19:39.924457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.277 ms 00:18:38.647 [2024-12-04 14:19:39.924465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.924504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.924513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.647 [2024-12-04 14:19:39.924522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:38.647 [2024-12-04 14:19:39.924529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.924859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.924882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.647 [2024-12-04 14:19:39.924891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:38.647 [2024-12-04 14:19:39.924902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.925011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.925019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.647 [2024-12-04 14:19:39.925027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:38.647 [2024-12-04 14:19:39.925034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.938777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.938806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.647 [2024-12-04 14:19:39.938815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.724 ms 00:18:38.647 [2024-12-04 14:19:39.938822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.951727] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:38.647 [2024-12-04 14:19:39.951771] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:38.647 [2024-12-04 14:19:39.951781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.951789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:38.647 [2024-12-04 14:19:39.951797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.874 ms 00:18:38.647 [2024-12-04 14:19:39.951805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.976202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.976234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:38.647 [2024-12-04 14:19:39.976246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.360 ms 00:18:38.647 [2024-12-04 14:19:39.976252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.988197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.988224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:38.647 [2024-12-04 14:19:39.988234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.908 ms 00:18:38.647 [2024-12-04 14:19:39.988240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:39.999941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:39.999975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:38.647 [2024-12-04 14:19:39.999984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.670 ms 00:18:38.647 [2024-12-04 14:19:39.999991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.000353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.000365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:38.647 [2024-12-04 14:19:40.000373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:38.647 [2024-12-04 14:19:40.000380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 
14:19:40.058810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.058963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:38.647 [2024-12-04 14:19:40.058981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.414 ms 00:18:38.647 [2024-12-04 14:19:40.058989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.069705] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:38.647 [2024-12-04 14:19:40.071985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.072015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.647 [2024-12-04 14:19:40.072026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.961 ms 00:18:38.647 [2024-12-04 14:19:40.072039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.072114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.072126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:38.647 [2024-12-04 14:19:40.072136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:38.647 [2024-12-04 14:19:40.072144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.072206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.072216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.647 [2024-12-04 14:19:40.072224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:38.647 [2024-12-04 14:19:40.072232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.073363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.073391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:38.647 [2024-12-04 14:19:40.073400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:18:38.647 [2024-12-04 14:19:40.073407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.073433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.073440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.647 [2024-12-04 14:19:40.073452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.647 [2024-12-04 14:19:40.073459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.073489] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:38.647 [2024-12-04 14:19:40.073497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.073506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:38.647 [2024-12-04 14:19:40.073514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:38.647 [2024-12-04 14:19:40.073521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.647 [2024-12-04 14:19:40.097326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.647 [2024-12-04 14:19:40.097359] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.647 [2024-12-04 14:19:40.097370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.789 ms 00:18:38.648 [2024-12-04 14:19:40.097377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.648 [2024-12-04 14:19:40.097445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.648 [2024-12-04 14:19:40.097454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:38.648 [2024-12-04 14:19:40.097462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:38.648 [2024-12-04 14:19:40.097469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.648 [2024-12-04 14:19:40.098345] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.371 ms, result 0 00:18:40.028  [2024-12-04T14:19:42.429Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-04T14:19:43.367Z] Copying: 25/1024 [MB] (14 MBps) [2024-12-04T14:19:44.305Z] Copying: 46/1024 [MB] (21 MBps) [2024-12-04T14:19:45.244Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-04T14:19:46.188Z] Copying: 90/1024 [MB] (20 MBps) [2024-12-04T14:19:47.133Z] Copying: 110/1024 [MB] (20 MBps) [2024-12-04T14:19:48.521Z] Copying: 133/1024 [MB] (23 MBps) [2024-12-04T14:19:49.467Z] Copying: 152/1024 [MB] (18 MBps) [2024-12-04T14:19:50.413Z] Copying: 169/1024 [MB] (16 MBps) [2024-12-04T14:19:51.358Z] Copying: 184/1024 [MB] (15 MBps) [2024-12-04T14:19:52.305Z] Copying: 199/1024 [MB] (14 MBps) [2024-12-04T14:19:53.250Z] Copying: 210/1024 [MB] (11 MBps) [2024-12-04T14:19:54.195Z] Copying: 221/1024 [MB] (11 MBps) [2024-12-04T14:19:55.140Z] Copying: 233/1024 [MB] (11 MBps) [2024-12-04T14:19:56.522Z] Copying: 245/1024 [MB] (11 MBps) [2024-12-04T14:19:57.464Z] Copying: 256/1024 [MB] (11 MBps) [2024-12-04T14:19:58.407Z] Copying: 267/1024 [MB] (11 MBps) [2024-12-04T14:19:59.348Z] Copying: 278/1024 [MB] (11 MBps) [2024-12-04T14:20:00.291Z] Copying: 296/1024 [MB] (17 MBps) [2024-12-04T14:20:01.267Z] Copying: 317/1024 [MB] (20 MBps) [2024-12-04T14:20:02.237Z] Copying: 331/1024 [MB] (14 MBps) [2024-12-04T14:20:03.185Z] Copying: 350/1024 [MB] (18 MBps) [2024-12-04T14:20:04.131Z] Copying: 370/1024 [MB] (20 MBps) [2024-12-04T14:20:05.513Z] Copying: 387/1024 [MB] (16 MBps) [2024-12-04T14:20:06.449Z] Copying: 398/1024 [MB] (11 MBps) [2024-12-04T14:20:07.386Z] Copying: 420/1024 [MB] (21 MBps) [2024-12-04T14:20:08.327Z] Copying: 436/1024 [MB] (16 MBps) [2024-12-04T14:20:09.271Z] Copying: 453/1024 [MB] (16 MBps) [2024-12-04T14:20:10.213Z] Copying: 464/1024 [MB] (10 MBps) [2024-12-04T14:20:11.154Z] Copying: 476/1024 [MB] (11 MBps) [2024-12-04T14:20:12.535Z] Copying: 488/1024 [MB] (11 MBps) [2024-12-04T14:20:13.475Z] Copying: 499/1024 [MB] (10 MBps) [2024-12-04T14:20:14.407Z] Copying: 509/1024 [MB] (10 MBps) [2024-12-04T14:20:15.341Z] Copying: 555/1024 [MB] (45 MBps) [2024-12-04T14:20:16.303Z] Copying: 607/1024 [MB] (51 MBps) [2024-12-04T14:20:17.243Z] Copying: 629/1024 [MB] (22 MBps) [2024-12-04T14:20:18.183Z] Copying: 652/1024 [MB] (23 MBps) [2024-12-04T14:20:19.124Z] Copying: 675/1024 [MB] (22 MBps) [2024-12-04T14:20:20.513Z] Copying: 693/1024 [MB] (18 MBps) [2024-12-04T14:20:21.476Z] Copying: 713/1024 [MB] (19 MBps) [2024-12-04T14:20:22.418Z] Copying: 731/1024 [MB] (18 MBps) [2024-12-04T14:20:23.364Z] Copying: 753/1024 [MB] (21 MBps) [2024-12-04T14:20:24.309Z] Copying: 768/1024 [MB] (15 MBps) [2024-12-04T14:20:25.282Z] 
Copying: 786/1024 [MB] (17 MBps) [2024-12-04T14:20:26.227Z] Copying: 805/1024 [MB] (19 MBps) [2024-12-04T14:20:27.171Z] Copying: 822/1024 [MB] (16 MBps) [2024-12-04T14:20:28.125Z] Copying: 838/1024 [MB] (15 MBps) [2024-12-04T14:20:29.504Z] Copying: 882/1024 [MB] (44 MBps) [2024-12-04T14:20:30.450Z] Copying: 925/1024 [MB] (43 MBps) [2024-12-04T14:20:31.396Z] Copying: 944/1024 [MB] (18 MBps) [2024-12-04T14:20:32.341Z] Copying: 965/1024 [MB] (20 MBps) [2024-12-04T14:20:33.282Z] Copying: 978/1024 [MB] (13 MBps) [2024-12-04T14:20:34.227Z] Copying: 992/1024 [MB] (13 MBps) [2024-12-04T14:20:35.173Z] Copying: 1007/1024 [MB] (14 MBps) [2024-12-04T14:20:36.122Z] Copying: 1018/1024 [MB] (11 MBps) [2024-12-04T14:20:36.383Z] Copying: 1048280/1048576 [kB] (5368 kBps) [2024-12-04T14:20:36.383Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-04 14:20:36.371143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.918 [2024-12-04 14:20:36.371202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.918 [2024-12-04 14:20:36.371216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:34.918 [2024-12-04 14:20:36.371224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.918 [2024-12-04 14:20:36.371248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.918 [2024-12-04 14:20:36.373875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.918 [2024-12-04 14:20:36.373902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.918 [2024-12-04 14:20:36.373912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:19:34.918 [2024-12-04 14:20:36.373919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-12-04 14:20:36.385789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.179 [2024-12-04 14:20:36.385897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.179 [2024-12-04 14:20:36.385962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.741 ms 00:19:35.179 [2024-12-04 14:20:36.385974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-12-04 14:20:36.405943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.179 [2024-12-04 14:20:36.405977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.179 [2024-12-04 14:20:36.405987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.950 ms 00:19:35.179 [2024-12-04 14:20:36.405995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-12-04 14:20:36.412079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.179 [2024-12-04 14:20:36.412110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:35.179 [2024-12-04 14:20:36.412121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.059 ms 00:19:35.179 [2024-12-04 14:20:36.412134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-12-04 14:20:36.436042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.179 [2024-12-04 14:20:36.436171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.179 [2024-12-04 14:20:36.436187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.873 ms 00:19:35.179 [2024-12-04 14:20:36.436195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.179 [2024-12-04 14:20:36.449918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.179 [2024-12-04 14:20:36.449948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.179 [2024-12-04 14:20:36.449959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.696 ms 00:19:35.179 [2024-12-04 14:20:36.449968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.680152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.441 [2024-12-04 14:20:36.680189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.441 [2024-12-04 14:20:36.680200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 230.146 ms 00:19:35.441 [2024-12-04 14:20:36.680208] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.704490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.441 [2024-12-04 14:20:36.704520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:35.441 [2024-12-04 14:20:36.704530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.263 ms 00:19:35.441 [2024-12-04 14:20:36.704537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.727736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.441 [2024-12-04 14:20:36.727851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:35.441 [2024-12-04 14:20:36.727875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.168 ms 00:19:35.441 [2024-12-04 14:20:36.727882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.750738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.441 [2024-12-04 14:20:36.750844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.441 [2024-12-04 14:20:36.750858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.829 ms 00:19:35.441 [2024-12-04 14:20:36.750866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.773847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.441 [2024-12-04 14:20:36.773952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.441 [2024-12-04 14:20:36.773967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:19:35.441 [2024-12-04 14:20:36.773973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.441 [2024-12-04 14:20:36.773998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.441 [2024-12-04 14:20:36.774012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92928 / 261120 wr_cnt: 1 state: open 00:19:35.441 [2024-12-04 14:20:36.774022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.441 [2024-12-04 14:20:36.774030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.441 [2024-12-04 14:20:36.774037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.441 [2024-12-04 
14:20:36.774044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:35.442 [2024-12-04 14:20:36.774262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.442 [2024-12-04 14:20:36.774795] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.442 [2024-12-04 14:20:36.774803] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf 00:19:35.442 [2024-12-04 14:20:36.774810] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92928 00:19:35.442 [2024-12-04 14:20:36.774818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93888 
00:19:35.442 [2024-12-04 14:20:36.774824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92928 00:19:35.442 [2024-12-04 14:20:36.774835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0103 00:19:35.442 [2024-12-04 14:20:36.774842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.442 [2024-12-04 14:20:36.774850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.442 [2024-12-04 14:20:36.774857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.442 [2024-12-04 14:20:36.774868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.442 [2024-12-04 14:20:36.774875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.442 [2024-12-04 14:20:36.774882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.442 [2024-12-04 14:20:36.774889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.442 [2024-12-04 14:20:36.774897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:19:35.442 [2024-12-04 14:20:36.774903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.787430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.443 [2024-12-04 14:20:36.787461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.443 [2024-12-04 14:20:36.787471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:19:35.443 [2024-12-04 14:20:36.787478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.787672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.443 [2024-12-04 14:20:36.787681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.443 [2024-12-04 14:20:36.787688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:19:35.443 [2024-12-04 14:20:36.787695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.822543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.443 [2024-12-04 14:20:36.822571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.443 [2024-12-04 14:20:36.822581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.443 [2024-12-04 14:20:36.822589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.822639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.443 [2024-12-04 14:20:36.822646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.443 [2024-12-04 14:20:36.822654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.443 [2024-12-04 14:20:36.822660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.822715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.443 [2024-12-04 14:20:36.822728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.443 [2024-12-04 14:20:36.822736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.443 [2024-12-04 14:20:36.822743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.822757] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.443 [2024-12-04 14:20:36.822764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.443 [2024-12-04 14:20:36.822771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.443 [2024-12-04 14:20:36.822778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.443 [2024-12-04 14:20:36.895978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.443 [2024-12-04 14:20:36.896014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.443 [2024-12-04 14:20:36.896023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.443 [2024-12-04 14:20:36.896031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.703 [2024-12-04 14:20:36.925597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.703 [2024-12-04 14:20:36.925678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.703 [2024-12-04 14:20:36.925739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.703 [2024-12-04 14:20:36.925847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.703 [2024-12-04 14:20:36.925898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.925937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.925945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.703 [2024-12-04 14:20:36.925952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.925961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:35.703 [2024-12-04 14:20:36.926002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.703 [2024-12-04 14:20:36.926010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.703 [2024-12-04 14:20:36.926018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.703 [2024-12-04 14:20:36.926025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.703 [2024-12-04 14:20:36.926156] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.127 ms, result 0 00:19:37.091 00:19:37.091 00:19:37.091 14:20:38 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:37.091 [2024-12-04 14:20:38.291043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:37.091 [2024-12-04 14:20:38.291171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74628 ] 00:19:37.091 [2024-12-04 14:20:38.439030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.354 [2024-12-04 14:20:38.611581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.617 [2024-12-04 14:20:38.861890] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.617 [2024-12-04 14:20:38.861943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.617 [2024-12-04 14:20:39.012082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.617 [2024-12-04 14:20:39.012134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:37.617 [2024-12-04 14:20:39.012147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.617 [2024-12-04 14:20:39.012157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.617 [2024-12-04 14:20:39.012202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.617 [2024-12-04 14:20:39.012213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.617 [2024-12-04 14:20:39.012221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:37.617 [2024-12-04 14:20:39.012229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.617 [2024-12-04 14:20:39.012248] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.617 [2024-12-04 14:20:39.012959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.617 [2024-12-04 14:20:39.012975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.617 [2024-12-04 14:20:39.012982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.617 [2024-12-04 14:20:39.012990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:19:37.617 [2024-12-04 14:20:39.012998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.617 [2024-12-04 14:20:39.014069] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
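The restore step above launches spdk_dd with --ib=ftl0, --of=.../testfile, --skip=131072 and --count=262144. Cross-checking within this log, those counts appear to be in 4 KiB FTL blocks: the layout dump that follows works out to 4 KiB per block (e.g. data_nvc, blk_sz 0x100000 = 1048576 blocks, printed as 4096.00 MiB), and 262144 x 4 KiB is exactly the 1024 MB that the copy meter reports. A quick sanity-check sketch; the 4 KiB block size is our inference from this log, not a documented spdk_dd contract:

    # Cross-check the spdk_dd transfer size against the copy meter in this log.
    BLOCK = 4096                      # inferred FTL block size, bytes
    skip, count = 131072, 262144      # from the spdk_dd command line above
    print(f"skip  = {skip * BLOCK / 2**20:6.0f} MiB")   # 512 MiB into the device
    print(f"count = {count * BLOCK / 2**20:6.0f} MiB")  # 1024 MiB, matches 1024/1024 [MB]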
00:19:37.618 [2024-12-04 14:20:39.026552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.026592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:37.618 [2024-12-04 14:20:39.026604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.486 ms 00:19:37.618 [2024-12-04 14:20:39.026611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.026660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.026669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:37.618 [2024-12-04 14:20:39.026677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:37.618 [2024-12-04 14:20:39.026684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.031445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.031471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.618 [2024-12-04 14:20:39.031480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.706 ms 00:19:37.618 [2024-12-04 14:20:39.031487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.031563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.031572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.618 [2024-12-04 14:20:39.031579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:37.618 [2024-12-04 14:20:39.031587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.031631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.031640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.618 [2024-12-04 14:20:39.031648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:37.618 [2024-12-04 14:20:39.031655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.031682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.618 [2024-12-04 14:20:39.035201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.035226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.618 [2024-12-04 14:20:39.035235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:19:37.618 [2024-12-04 14:20:39.035242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.035272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.035279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.618 [2024-12-04 14:20:39.035287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:37.618 [2024-12-04 14:20:39.035296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.035316] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.618 [2024-12-04 14:20:39.035333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x138 bytes 00:19:37.618 [2024-12-04 14:20:39.035364] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.618 [2024-12-04 14:20:39.035378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:37.618 [2024-12-04 14:20:39.035449] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:37.618 [2024-12-04 14:20:39.035460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.618 [2024-12-04 14:20:39.035472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:37.618 [2024-12-04 14:20:39.035482] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035491] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035499] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:37.618 [2024-12-04 14:20:39.035506] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.618 [2024-12-04 14:20:39.035514] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:37.618 [2024-12-04 14:20:39.035521] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:37.618 [2024-12-04 14:20:39.035528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.035535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.618 [2024-12-04 14:20:39.035543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:19:37.618 [2024-12-04 14:20:39.035550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.035610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.618 [2024-12-04 14:20:39.035618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.618 [2024-12-04 14:20:39.035625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:37.618 [2024-12-04 14:20:39.035632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.618 [2024-12-04 14:20:39.035711] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.618 [2024-12-04 14:20:39.035720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.618 [2024-12-04 14:20:39.035728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.618 [2024-12-04 14:20:39.035750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.618 [2024-12-04 14:20:39.035772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035779] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:19:37.618 [2024-12-04 14:20:39.035786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.618 [2024-12-04 14:20:39.035793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:37.618 [2024-12-04 14:20:39.035800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.618 [2024-12-04 14:20:39.035806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.618 [2024-12-04 14:20:39.035813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:37.618 [2024-12-04 14:20:39.035820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.618 [2024-12-04 14:20:39.035840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:37.618 [2024-12-04 14:20:39.035846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035852] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:37.618 [2024-12-04 14:20:39.035859] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:37.618 [2024-12-04 14:20:39.035866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.618 [2024-12-04 14:20:39.035879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.618 [2024-12-04 14:20:39.035900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.618 [2024-12-04 14:20:39.035919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:37.618 [2024-12-04 14:20:39.035926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:37.618 [2024-12-04 14:20:39.035932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.619 [2024-12-04 14:20:39.035938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:37.619 [2024-12-04 14:20:39.035945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:37.619 [2024-12-04 14:20:39.035952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.619 [2024-12-04 14:20:39.035958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:37.619 [2024-12-04 14:20:39.035965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.619 [2024-12-04 14:20:39.035971] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.619 [2024-12-04 14:20:39.035977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:37.619 [2024-12-04 14:20:39.035984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.619 [2024-12-04 14:20:39.035990] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.619 [2024-12-04 14:20:39.036000] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.619 [2024-12-04 14:20:39.036007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.619 [2024-12-04 14:20:39.036015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.619 [2024-12-04 14:20:39.036023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.619 [2024-12-04 14:20:39.036030] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.619 [2024-12-04 14:20:39.036037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.619 [2024-12-04 14:20:39.036043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.619 [2024-12-04 14:20:39.036050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.619 [2024-12-04 14:20:39.036056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.619 [2024-12-04 14:20:39.036064] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.619 [2024-12-04 14:20:39.036073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.619 [2024-12-04 14:20:39.036081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:37.619 [2024-12-04 14:20:39.036105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:37.619 [2024-12-04 14:20:39.036113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:37.619 [2024-12-04 14:20:39.036121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:37.619 [2024-12-04 14:20:39.036128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:37.619 [2024-12-04 14:20:39.036135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:37.619 [2024-12-04 14:20:39.036143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:37.619 [2024-12-04 14:20:39.036150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:37.619 [2024-12-04 14:20:39.036157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:37.619 [2024-12-04 14:20:39.036164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:37.619 [2024-12-04 14:20:39.036171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:37.619 [2024-12-04 14:20:39.036178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:37.619 [2024-12-04 14:20:39.036187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 
blk_sz:0x3d120 00:19:37.619 [2024-12-04 14:20:39.036194] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.619 [2024-12-04 14:20:39.036203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.619 [2024-12-04 14:20:39.036211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.619 [2024-12-04 14:20:39.036218] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.619 [2024-12-04 14:20:39.036226] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.619 [2024-12-04 14:20:39.036234] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.619 [2024-12-04 14:20:39.036241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.619 [2024-12-04 14:20:39.036249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.619 [2024-12-04 14:20:39.036256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:19:37.619 [2024-12-04 14:20:39.036264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.619 [2024-12-04 14:20:39.050817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.619 [2024-12-04 14:20:39.050848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.619 [2024-12-04 14:20:39.050858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:19:37.619 [2024-12-04 14:20:39.050870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.619 [2024-12-04 14:20:39.050951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.619 [2024-12-04 14:20:39.050959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.619 [2024-12-04 14:20:39.050967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:37.619 [2024-12-04 14:20:39.050974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.090809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.090846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.882 [2024-12-04 14:20:39.090858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.793 ms 00:19:37.882 [2024-12-04 14:20:39.090866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.090903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.090913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.882 [2024-12-04 14:20:39.090921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:37.882 [2024-12-04 14:20:39.090928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.091297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.091313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.882 
[2024-12-04 14:20:39.091322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:37.882 [2024-12-04 14:20:39.091333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.091439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.091448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.882 [2024-12-04 14:20:39.091456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:37.882 [2024-12-04 14:20:39.091463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.105035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.105065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.882 [2024-12-04 14:20:39.105074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.553 ms 00:19:37.882 [2024-12-04 14:20:39.105081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.117740] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:37.882 [2024-12-04 14:20:39.117772] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:37.882 [2024-12-04 14:20:39.117782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.117790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:37.882 [2024-12-04 14:20:39.117798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms 00:19:37.882 [2024-12-04 14:20:39.117805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.142223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.142267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:37.882 [2024-12-04 14:20:39.142278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.381 ms 00:19:37.882 [2024-12-04 14:20:39.142286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.154115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.154144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:37.882 [2024-12-04 14:20:39.154153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.791 ms 00:19:37.882 [2024-12-04 14:20:39.154160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.165820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.165936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:37.882 [2024-12-04 14:20:39.165959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.625 ms 00:19:37.882 [2024-12-04 14:20:39.165967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.166331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.166344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.882 [2024-12-04 14:20:39.166352] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:37.882 [2024-12-04 14:20:39.166359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.223908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.223948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:37.882 [2024-12-04 14:20:39.223961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.532 ms 00:19:37.882 [2024-12-04 14:20:39.223969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.234489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:37.882 [2024-12-04 14:20:39.236575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.236603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.882 [2024-12-04 14:20:39.236614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.567 ms 00:19:37.882 [2024-12-04 14:20:39.236626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.236681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.236692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:37.882 [2024-12-04 14:20:39.236702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.882 [2024-12-04 14:20:39.236710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.237643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.237672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.882 [2024-12-04 14:20:39.237682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:19:37.882 [2024-12-04 14:20:39.237690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.238830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.238855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:37.882 [2024-12-04 14:20:39.238865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:19:37.882 [2024-12-04 14:20:39.238872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.238898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.238905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.882 [2024-12-04 14:20:39.238918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.882 [2024-12-04 14:20:39.238925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.238954] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:37.882 [2024-12-04 14:20:39.238963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.238972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:37.882 [2024-12-04 14:20:39.238980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.882 [2024-12-04 14:20:39.238987] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.262915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.263037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.882 [2024-12-04 14:20:39.263054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:19:37.882 [2024-12-04 14:20:39.263062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.263139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.882 [2024-12-04 14:20:39.263148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.882 [2024-12-04 14:20:39.263157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:37.882 [2024-12-04 14:20:39.263164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.882 [2024-12-04 14:20:39.268373] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.232 ms, result 0 00:19:39.268
[progress meter condensed: Copying 18/1024 -> 1024/1024 [MB], 2024-12-04T14:20:41Z -> 14:21:43Z, average 16 MBps]
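With the read pass complete, the statistics block from the earlier shutdown (ftl_debug.c ftl_dev_dump_stats, further up) is easy to verify by hand: 93888 total writes over 92928 user writes reproduces the printed WAF of 1.0103, consistent with WAF being total writes divided by user writes here. As a one-line check:

    # Reproduce the WAF printed by ftl_dev_dump_stats in the first shutdown dump.
    total_writes, user_writes = 93888, 92928   # values from the log above
    print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0103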
[2024-12-04 14:21:43.624687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.624772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.247 [2024-12-04 14:21:43.624804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.247 [2024-12-04 14:21:43.624813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.624840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.247 [2024-12-04 14:21:43.627890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.628106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.247 [2024-12-04 14:21:43.628130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:20:42.247 [2024-12-04 14:21:43.628139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.628403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.628414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.247 [2024-12-04 14:21:43.628428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:42.247 [2024-12-04 14:21:43.628436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.635471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.637948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.247 [2024-12-04 14:21:43.637977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.017 ms 00:20:42.247 [2024-12-04 14:21:43.637986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.644222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.644379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0]
name: Finish L2P unmaps 00:20:42.247 [2024-12-04 14:21:43.644398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:20:42.247 [2024-12-04 14:21:43.644416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.673279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.673331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.247 [2024-12-04 14:21:43.673347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.782 ms 00:20:42.247 [2024-12-04 14:21:43.673355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.247 [2024-12-04 14:21:43.692916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.247 [2024-12-04 14:21:43.692968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.247 [2024-12-04 14:21:43.692983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.509 ms 00:20:42.247 [2024-12-04 14:21:43.692993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.065053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.820 [2024-12-04 14:21:44.065127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.820 [2024-12-04 14:21:44.065143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 371.996 ms 00:20:42.820 [2024-12-04 14:21:44.065153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.091713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.820 [2024-12-04 14:21:44.091761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:42.820 [2024-12-04 14:21:44.091774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:20:42.820 [2024-12-04 14:21:44.091782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.117468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.820 [2024-12-04 14:21:44.117515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:42.820 [2024-12-04 14:21:44.117528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.638 ms 00:20:42.820 [2024-12-04 14:21:44.117547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.142654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.820 [2024-12-04 14:21:44.142700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.820 [2024-12-04 14:21:44.142714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.060 ms 00:20:42.820 [2024-12-04 14:21:44.142722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.168052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.820 [2024-12-04 14:21:44.168113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.820 [2024-12-04 14:21:44.168126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.241 ms 00:20:42.820 [2024-12-04 14:21:44.168134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.820 [2024-12-04 14:21:44.168180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:20:42.820 [2024-12-04 14:21:44.168196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:20:42.820 [2024-12-04 14:21:44.168209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.821 [2024-12-04 14:21:44.169023] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.821 [2024-12-04 14:21:44.169031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef1f7f6e-60a9-4b63-9e0f-14b993eb1acf 00:20:42.821 [2024-12-04 14:21:44.169040] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:20:42.821 [2024-12-04 14:21:44.169048] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 41920 00:20:42.821 [2024-12-04 14:21:44.169056] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 40960 00:20:42.821 [2024-12-04 14:21:44.169072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0234 00:20:42.821 [2024-12-04 14:21:44.169080] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.821 [2024-12-04 14:21:44.169099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.821 [2024-12-04 14:21:44.169107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.821 [2024-12-04 14:21:44.169114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.821 [2024-12-04 14:21:44.169130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.821 [2024-12-04 14:21:44.169138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.821 [2024-12-04 14:21:44.169147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.821 [2024-12-04 14:21:44.169156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:20:42.821 [2024-12-04 14:21:44.169165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.182854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.821 [2024-12-04 14:21:44.183032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.821 [2024-12-04 14:21:44.183051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.642 ms 00:20:42.821 [2024-12-04 14:21:44.183060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.183307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.821 [2024-12-04 14:21:44.183317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.821 [2024-12-04 14:21:44.183327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:42.821 [2024-12-04 14:21:44.183334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.222737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.821 [2024-12-04 14:21:44.222788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.821 [2024-12-04 14:21:44.222800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.821 [2024-12-04 14:21:44.222808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.222881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.821 [2024-12-04 14:21:44.222890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.821 [2024-12-04 14:21:44.222898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.821 [2024-12-04
14:21:44.222907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.222985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.821 [2024-12-04 14:21:44.223000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.821 [2024-12-04 14:21:44.223009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.821 [2024-12-04 14:21:44.223017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.821 [2024-12-04 14:21:44.223032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.821 [2024-12-04 14:21:44.223041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.821 [2024-12-04 14:21:44.223049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.821 [2024-12-04 14:21:44.223057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.304754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.304812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.084 [2024-12-04 14:21:44.304825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.304834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.084 [2024-12-04 14:21:44.337471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.084 [2024-12-04 14:21:44.337575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.084 [2024-12-04 14:21:44.337643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.084 [2024-12-04 14:21:44.337775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:43.084 [2024-12-04 14:21:44.337836] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.084 [2024-12-04 14:21:44.337903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.337961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:43.084 [2024-12-04 14:21:44.337970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.084 [2024-12-04 14:21:44.337978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:43.084 [2024-12-04 14:21:44.337987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.084 [2024-12-04 14:21:44.338163] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 713.398 ms, result 0 00:20:44.028 00:20:44.028 00:20:44.028 14:21:45 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:46.014 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:46.014 14:21:47 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:46.014 14:21:47 -- ftl/restore.sh@85 -- # restore_kill 00:20:46.014 14:21:47 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:46.303 14:21:47 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:46.304 14:21:47 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:46.304 14:21:47 -- ftl/restore.sh@32 -- # killprocess 72518 00:20:46.304 14:21:47 -- common/autotest_common.sh@936 -- # '[' -z 72518 ']' 00:20:46.304 14:21:47 -- common/autotest_common.sh@940 -- # kill -0 72518 00:20:46.304 Process with pid 72518 is not found 00:20:46.304 Remove shared memory files 00:20:46.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72518) - No such process 00:20:46.304 14:21:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72518 is not found' 00:20:46.304 14:21:47 -- ftl/restore.sh@33 -- # remove_shm 00:20:46.304 14:21:47 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:46.304 14:21:47 -- ftl/common.sh@205 -- # rm -f rm -f 00:20:46.304 14:21:47 -- ftl/common.sh@206 -- # rm -f rm -f 00:20:46.304 14:21:47 -- ftl/common.sh@207 -- # rm -f rm -f 00:20:46.304 14:21:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:46.304 14:21:47 -- ftl/common.sh@209 -- # rm -f rm -f 00:20:46.304 ************************************ 00:20:46.304 END TEST ftl_restore 00:20:46.304 ************************************ 00:20:46.304 00:20:46.304 real 4m25.707s 00:20:46.304 user 4m14.895s 00:20:46.304 sys 0m10.890s 00:20:46.304 14:21:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:46.304 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:20:46.304 14:21:47 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:20:46.304 14:21:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:46.304 14:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:20:46.304 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:20:46.304 ************************************ 00:20:46.304 START TEST ftl_dirty_shutdown 00:20:46.304 ************************************ 00:20:46.304 14:21:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:20:46.304 * Looking for test storage... 00:20:46.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:46.304 14:21:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:46.304 14:21:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:46.304 14:21:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:46.304 14:21:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:46.304 14:21:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:46.304 14:21:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:46.304 14:21:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:46.304 14:21:47 -- scripts/common.sh@335 -- # IFS=.-: 00:20:46.304 14:21:47 -- scripts/common.sh@335 -- # read -ra ver1 00:20:46.304 14:21:47 -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.304 14:21:47 -- scripts/common.sh@336 -- # read -ra ver2 00:20:46.304 14:21:47 -- scripts/common.sh@337 -- # local 'op=<' 00:20:46.304 14:21:47 -- scripts/common.sh@339 -- # ver1_l=2 00:20:46.304 14:21:47 -- scripts/common.sh@340 -- # ver2_l=1 00:20:46.304 14:21:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:46.304 14:21:47 -- scripts/common.sh@343 -- # case "$op" in 00:20:46.304 14:21:47 -- scripts/common.sh@344 -- # : 1 00:20:46.304 14:21:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:46.304 14:21:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.304 14:21:47 -- scripts/common.sh@364 -- # decimal 1 00:20:46.304 14:21:47 -- scripts/common.sh@352 -- # local d=1 00:20:46.304 14:21:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.304 14:21:47 -- scripts/common.sh@354 -- # echo 1 00:20:46.304 14:21:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:46.304 14:21:47 -- scripts/common.sh@365 -- # decimal 2 00:20:46.304 14:21:47 -- scripts/common.sh@352 -- # local d=2 00:20:46.304 14:21:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.304 14:21:47 -- scripts/common.sh@354 -- # echo 2 00:20:46.304 14:21:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:46.304 14:21:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:46.304 14:21:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:46.304 14:21:47 -- scripts/common.sh@367 -- # return 0 00:20:46.304 14:21:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.304 14:21:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.304 --rc genhtml_branch_coverage=1 00:20:46.304 --rc genhtml_function_coverage=1 00:20:46.304 --rc genhtml_legend=1 00:20:46.304 --rc geninfo_all_blocks=1 00:20:46.304 --rc geninfo_unexecuted_blocks=1 00:20:46.304 00:20:46.304 ' 00:20:46.304 14:21:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.304 --rc genhtml_branch_coverage=1 00:20:46.304 --rc genhtml_function_coverage=1 00:20:46.304 --rc genhtml_legend=1 00:20:46.304 --rc geninfo_all_blocks=1 00:20:46.304 --rc geninfo_unexecuted_blocks=1 00:20:46.304 00:20:46.304 ' 00:20:46.304 
14:21:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.304 --rc genhtml_branch_coverage=1 00:20:46.304 --rc genhtml_function_coverage=1 00:20:46.304 --rc genhtml_legend=1 00:20:46.304 --rc geninfo_all_blocks=1 00:20:46.304 --rc geninfo_unexecuted_blocks=1 00:20:46.304 00:20:46.304 ' 00:20:46.304 14:21:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:46.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.304 --rc genhtml_branch_coverage=1 00:20:46.304 --rc genhtml_function_coverage=1 00:20:46.304 --rc genhtml_legend=1 00:20:46.304 --rc geninfo_all_blocks=1 00:20:46.304 --rc geninfo_unexecuted_blocks=1 00:20:46.304 00:20:46.304 ' 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:46.304 14:21:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:46.304 14:21:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:46.304 14:21:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:46.304 14:21:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:46.304 14:21:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:46.304 14:21:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.304 14:21:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:46.304 14:21:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:46.304 14:21:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.304 14:21:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.304 14:21:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:46.304 14:21:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:46.304 14:21:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:46.304 14:21:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:46.304 14:21:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:46.304 14:21:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:46.304 14:21:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.304 14:21:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.304 14:21:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:46.304 14:21:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:46.304 14:21:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:46.304 14:21:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:46.304 14:21:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:46.304 14:21:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:46.304 14:21:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:46.304 14:21:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:46.304 14:21:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:46.304 14:21:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@11 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75408 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:46.304 14:21:47 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75408 00:20:46.304 14:21:47 -- common/autotest_common.sh@829 -- # '[' -z 75408 ']' 00:20:46.304 14:21:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.304 14:21:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.304 14:21:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.304 14:21:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.304 14:21:47 -- common/autotest_common.sh@10 -- # set +x 00:20:46.564 [2024-12-04 14:21:47.770566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:46.564 [2024-12-04 14:21:47.770810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75408 ] 00:20:46.564 [2024-12-04 14:21:47.918415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.824 [2024-12-04 14:21:48.095825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:46.824 [2024-12-04 14:21:48.096165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.209 14:21:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.209 14:21:49 -- common/autotest_common.sh@862 -- # return 0 00:20:48.209 14:21:49 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:20:48.209 14:21:49 -- ftl/common.sh@54 -- # local name=nvme0 00:20:48.209 14:21:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:20:48.209 14:21:49 -- ftl/common.sh@56 -- # local size=103424 00:20:48.209 14:21:49 -- ftl/common.sh@59 -- # local base_bdev 00:20:48.209 14:21:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:20:48.209 14:21:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:48.209 14:21:49 -- ftl/common.sh@62 -- # local base_size 00:20:48.209 14:21:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:48.209 14:21:49 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:20:48.209 14:21:49 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:48.209 14:21:49 -- common/autotest_common.sh@1369 -- # local bs 00:20:48.209 14:21:49 -- common/autotest_common.sh@1370 -- # local nb 00:20:48.209 14:21:49 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:48.470 14:21:49 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:48.470 { 00:20:48.470 "name": "nvme0n1", 00:20:48.470 "aliases": [ 00:20:48.470 "87b24811-1231-41a8-a3d6-78a1c0d23f7d" 00:20:48.470 ], 00:20:48.470 "product_name": "NVMe disk", 00:20:48.470 "block_size": 4096, 00:20:48.470 "num_blocks": 1310720, 00:20:48.470 "uuid": "87b24811-1231-41a8-a3d6-78a1c0d23f7d", 00:20:48.470 "assigned_rate_limits": { 00:20:48.470 "rw_ios_per_sec": 0, 00:20:48.470 "rw_mbytes_per_sec": 0, 00:20:48.470 "r_mbytes_per_sec": 0, 00:20:48.470 "w_mbytes_per_sec": 0 00:20:48.470 }, 00:20:48.470 "claimed": true, 00:20:48.470 "claim_type": "read_many_write_one", 00:20:48.471 "zoned": false, 00:20:48.471 "supported_io_types": { 00:20:48.471 "read": true, 00:20:48.471 "write": true, 00:20:48.471 "unmap": true, 00:20:48.471 "write_zeroes": true, 00:20:48.471 "flush": true, 00:20:48.471 "reset": true, 00:20:48.471 "compare": true, 00:20:48.471 "compare_and_write": false, 00:20:48.471 "abort": true, 00:20:48.471 "nvme_admin": true, 00:20:48.471 "nvme_io": true 00:20:48.471 }, 00:20:48.471 "driver_specific": { 00:20:48.471 "nvme": [ 00:20:48.471 { 00:20:48.471 "pci_address": "0000:00:07.0", 00:20:48.471 "trid": { 00:20:48.471 "trtype": "PCIe", 00:20:48.471 "traddr": "0000:00:07.0" 00:20:48.471 }, 00:20:48.471 "ctrlr_data": { 00:20:48.471 "cntlid": 0, 00:20:48.471 "vendor_id": "0x1b36", 00:20:48.471 "model_number": "QEMU NVMe Ctrl", 00:20:48.471 "serial_number": "12341", 00:20:48.471 "firmware_revision": "8.0.0", 00:20:48.471 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:48.471 "oacs": { 00:20:48.471 "security": 
0, 00:20:48.471 "format": 1, 00:20:48.471 "firmware": 0, 00:20:48.471 "ns_manage": 1 00:20:48.471 }, 00:20:48.471 "multi_ctrlr": false, 00:20:48.471 "ana_reporting": false 00:20:48.471 }, 00:20:48.471 "vs": { 00:20:48.471 "nvme_version": "1.4" 00:20:48.471 }, 00:20:48.471 "ns_data": { 00:20:48.471 "id": 1, 00:20:48.471 "can_share": false 00:20:48.471 } 00:20:48.471 } 00:20:48.471 ], 00:20:48.471 "mp_policy": "active_passive" 00:20:48.471 } 00:20:48.471 } 00:20:48.471 ]' 00:20:48.471 14:21:49 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:48.471 14:21:49 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:48.471 14:21:49 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:20:48.471 14:21:49 -- common/autotest_common.sh@1373 -- # nb=1310720 00:20:48.471 14:21:49 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:20:48.471 14:21:49 -- common/autotest_common.sh@1377 -- # echo 5120 00:20:48.471 14:21:49 -- ftl/common.sh@63 -- # base_size=5120 00:20:48.471 14:21:49 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:48.471 14:21:49 -- ftl/common.sh@67 -- # clear_lvols 00:20:48.471 14:21:49 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:48.471 14:21:49 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:48.732 14:21:49 -- ftl/common.sh@28 -- # stores=7c5b0691-1bdf-4874-8f45-d6da68affdf2 00:20:48.732 14:21:49 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:48.732 14:21:49 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c5b0691-1bdf-4874-8f45-d6da68affdf2 00:20:48.994 14:21:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:48.994 14:21:50 -- ftl/common.sh@68 -- # lvs=89f14f96-693f-4fdb-89b2-110489ae53a3 00:20:48.994 14:21:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 89f14f96-693f-4fdb-89b2-110489ae53a3 00:20:49.254 14:21:50 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.254 14:21:50 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:20:49.254 14:21:50 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.254 14:21:50 -- ftl/common.sh@35 -- # local name=nvc0 00:20:49.254 14:21:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:20:49.254 14:21:50 -- ftl/common.sh@37 -- # local base_bdev=3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.254 14:21:50 -- ftl/common.sh@38 -- # local cache_size= 00:20:49.254 14:21:50 -- ftl/common.sh@41 -- # get_bdev_size 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.254 14:21:50 -- common/autotest_common.sh@1367 -- # local bdev_name=3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.254 14:21:50 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:49.254 14:21:50 -- common/autotest_common.sh@1369 -- # local bs 00:20:49.254 14:21:50 -- common/autotest_common.sh@1370 -- # local nb 00:20:49.254 14:21:50 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.512 14:21:50 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:49.512 { 00:20:49.512 "name": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:49.512 "aliases": [ 00:20:49.512 "lvs/nvme0n1p0" 00:20:49.512 ], 00:20:49.512 "product_name": "Logical Volume", 00:20:49.512 "block_size": 4096, 00:20:49.512 "num_blocks": 26476544, 00:20:49.512 
"uuid": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:49.512 "assigned_rate_limits": { 00:20:49.512 "rw_ios_per_sec": 0, 00:20:49.512 "rw_mbytes_per_sec": 0, 00:20:49.512 "r_mbytes_per_sec": 0, 00:20:49.512 "w_mbytes_per_sec": 0 00:20:49.512 }, 00:20:49.512 "claimed": false, 00:20:49.512 "zoned": false, 00:20:49.512 "supported_io_types": { 00:20:49.512 "read": true, 00:20:49.512 "write": true, 00:20:49.512 "unmap": true, 00:20:49.512 "write_zeroes": true, 00:20:49.512 "flush": false, 00:20:49.512 "reset": true, 00:20:49.512 "compare": false, 00:20:49.512 "compare_and_write": false, 00:20:49.512 "abort": false, 00:20:49.512 "nvme_admin": false, 00:20:49.512 "nvme_io": false 00:20:49.512 }, 00:20:49.512 "driver_specific": { 00:20:49.512 "lvol": { 00:20:49.512 "lvol_store_uuid": "89f14f96-693f-4fdb-89b2-110489ae53a3", 00:20:49.512 "base_bdev": "nvme0n1", 00:20:49.512 "thin_provision": true, 00:20:49.512 "snapshot": false, 00:20:49.512 "clone": false, 00:20:49.512 "esnap_clone": false 00:20:49.512 } 00:20:49.512 } 00:20:49.512 } 00:20:49.512 ]' 00:20:49.512 14:21:50 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:49.512 14:21:50 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:49.512 14:21:50 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:20:49.512 14:21:50 -- common/autotest_common.sh@1373 -- # nb=26476544 00:20:49.512 14:21:50 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:20:49.512 14:21:50 -- common/autotest_common.sh@1377 -- # echo 103424 00:20:49.512 14:21:50 -- ftl/common.sh@41 -- # local base_size=5171 00:20:49.512 14:21:50 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:49.512 14:21:50 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:20:49.770 14:21:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:49.770 14:21:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:49.770 14:21:51 -- ftl/common.sh@48 -- # get_bdev_size 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.770 14:21:51 -- common/autotest_common.sh@1367 -- # local bdev_name=3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:49.770 14:21:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:49.770 14:21:51 -- common/autotest_common.sh@1369 -- # local bs 00:20:49.770 14:21:51 -- common/autotest_common.sh@1370 -- # local nb 00:20:49.770 14:21:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:50.030 14:21:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:50.030 { 00:20:50.030 "name": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:50.030 "aliases": [ 00:20:50.030 "lvs/nvme0n1p0" 00:20:50.030 ], 00:20:50.030 "product_name": "Logical Volume", 00:20:50.030 "block_size": 4096, 00:20:50.030 "num_blocks": 26476544, 00:20:50.030 "uuid": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:50.030 "assigned_rate_limits": { 00:20:50.030 "rw_ios_per_sec": 0, 00:20:50.030 "rw_mbytes_per_sec": 0, 00:20:50.030 "r_mbytes_per_sec": 0, 00:20:50.030 "w_mbytes_per_sec": 0 00:20:50.030 }, 00:20:50.030 "claimed": false, 00:20:50.030 "zoned": false, 00:20:50.030 "supported_io_types": { 00:20:50.030 "read": true, 00:20:50.030 "write": true, 00:20:50.030 "unmap": true, 00:20:50.030 "write_zeroes": true, 00:20:50.030 "flush": false, 00:20:50.030 "reset": true, 00:20:50.030 "compare": false, 00:20:50.030 "compare_and_write": false, 00:20:50.030 "abort": false, 00:20:50.030 "nvme_admin": false, 00:20:50.030 "nvme_io": false 00:20:50.030 }, 
00:20:50.030 "driver_specific": { 00:20:50.030 "lvol": { 00:20:50.030 "lvol_store_uuid": "89f14f96-693f-4fdb-89b2-110489ae53a3", 00:20:50.030 "base_bdev": "nvme0n1", 00:20:50.030 "thin_provision": true, 00:20:50.030 "snapshot": false, 00:20:50.030 "clone": false, 00:20:50.030 "esnap_clone": false 00:20:50.030 } 00:20:50.030 } 00:20:50.030 } 00:20:50.030 ]' 00:20:50.030 14:21:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:50.030 14:21:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:50.030 14:21:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:20:50.030 14:21:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:20:50.030 14:21:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:20:50.030 14:21:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:20:50.030 14:21:51 -- ftl/common.sh@48 -- # cache_size=5171 00:20:50.030 14:21:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:50.289 14:21:51 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:50.289 14:21:51 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:50.289 14:21:51 -- common/autotest_common.sh@1367 -- # local bdev_name=3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:50.289 14:21:51 -- common/autotest_common.sh@1368 -- # local bdev_info 00:20:50.289 14:21:51 -- common/autotest_common.sh@1369 -- # local bs 00:20:50.289 14:21:51 -- common/autotest_common.sh@1370 -- # local nb 00:20:50.289 14:21:51 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3590c592-e012-42e0-99c4-41e8cf2390b5 00:20:50.289 14:21:51 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:20:50.289 { 00:20:50.289 "name": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:50.289 "aliases": [ 00:20:50.289 "lvs/nvme0n1p0" 00:20:50.289 ], 00:20:50.289 "product_name": "Logical Volume", 00:20:50.289 "block_size": 4096, 00:20:50.289 "num_blocks": 26476544, 00:20:50.289 "uuid": "3590c592-e012-42e0-99c4-41e8cf2390b5", 00:20:50.289 "assigned_rate_limits": { 00:20:50.289 "rw_ios_per_sec": 0, 00:20:50.289 "rw_mbytes_per_sec": 0, 00:20:50.289 "r_mbytes_per_sec": 0, 00:20:50.289 "w_mbytes_per_sec": 0 00:20:50.289 }, 00:20:50.289 "claimed": false, 00:20:50.289 "zoned": false, 00:20:50.289 "supported_io_types": { 00:20:50.289 "read": true, 00:20:50.289 "write": true, 00:20:50.289 "unmap": true, 00:20:50.289 "write_zeroes": true, 00:20:50.289 "flush": false, 00:20:50.289 "reset": true, 00:20:50.289 "compare": false, 00:20:50.289 "compare_and_write": false, 00:20:50.289 "abort": false, 00:20:50.289 "nvme_admin": false, 00:20:50.289 "nvme_io": false 00:20:50.289 }, 00:20:50.289 "driver_specific": { 00:20:50.289 "lvol": { 00:20:50.289 "lvol_store_uuid": "89f14f96-693f-4fdb-89b2-110489ae53a3", 00:20:50.289 "base_bdev": "nvme0n1", 00:20:50.289 "thin_provision": true, 00:20:50.289 "snapshot": false, 00:20:50.289 "clone": false, 00:20:50.289 "esnap_clone": false 00:20:50.289 } 00:20:50.289 } 00:20:50.289 } 00:20:50.289 ]' 00:20:50.289 14:21:51 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:20:50.289 14:21:51 -- common/autotest_common.sh@1372 -- # bs=4096 00:20:50.289 14:21:51 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:20:50.549 14:21:51 -- common/autotest_common.sh@1373 -- # nb=26476544 00:20:50.549 14:21:51 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:20:50.549 14:21:51 -- common/autotest_common.sh@1377 -- # echo 103424 00:20:50.549 
14:21:51 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:50.549 14:21:51 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3590c592-e012-42e0-99c4-41e8cf2390b5 --l2p_dram_limit 10' 00:20:50.549 14:21:51 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:50.549 14:21:51 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:20:50.549 14:21:51 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:50.549 14:21:51 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3590c592-e012-42e0-99c4-41e8cf2390b5 --l2p_dram_limit 10 -c nvc0n1p0 00:20:50.549 [2024-12-04 14:21:51.953771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.953898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.549 [2024-12-04 14:21:51.953917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.549 [2024-12-04 14:21:51.953925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.953972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.953980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.549 [2024-12-04 14:21:51.953988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:50.549 [2024-12-04 14:21:51.953993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.954010] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.549 [2024-12-04 14:21:51.954637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.549 [2024-12-04 14:21:51.954654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.954660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.549 [2024-12-04 14:21:51.954668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:20:50.549 [2024-12-04 14:21:51.954674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.954726] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6f4bde51-aa2b-4599-b89d-de6a76aa5c08 00:20:50.549 [2024-12-04 14:21:51.955677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.955700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:50.549 [2024-12-04 14:21:51.955708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:50.549 [2024-12-04 14:21:51.955715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.960416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.960518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.549 [2024-12-04 14:21:51.960529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:20:50.549 [2024-12-04 14:21:51.960537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.960604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.960613] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.549 [2024-12-04 14:21:51.960619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:50.549 [2024-12-04 14:21:51.960628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.960661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.960672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.549 [2024-12-04 14:21:51.960678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.549 [2024-12-04 14:21:51.960684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.960703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.549 [2024-12-04 14:21:51.963617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.963710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.549 [2024-12-04 14:21:51.963724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:20:50.549 [2024-12-04 14:21:51.963730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.963760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.549 [2024-12-04 14:21:51.963766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.549 [2024-12-04 14:21:51.963774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.549 [2024-12-04 14:21:51.963780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.549 [2024-12-04 14:21:51.963799] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:50.549 [2024-12-04 14:21:51.963885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:50.549 [2024-12-04 14:21:51.963897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.549 [2024-12-04 14:21:51.963905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:50.549 [2024-12-04 14:21:51.963914] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.549 [2024-12-04 14:21:51.963921] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.550 [2024-12-04 14:21:51.963930] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.550 [2024-12-04 14:21:51.963942] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.550 [2024-12-04 14:21:51.963948] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:50.550 [2024-12-04 14:21:51.963954] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:50.550 [2024-12-04 14:21:51.963961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:51.963967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.550 [2024-12-04 14:21:51.963974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:20:50.550 [2024-12-04 14:21:51.963979] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:51.964027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:51.964034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.550 [2024-12-04 14:21:51.964040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:50.550 [2024-12-04 14:21:51.964047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:51.964119] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.550 [2024-12-04 14:21:51.964128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.550 [2024-12-04 14:21:51.964135] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.550 [2024-12-04 14:21:51.964152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.550 [2024-12-04 14:21:51.964170] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.550 [2024-12-04 14:21:51.964181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.550 [2024-12-04 14:21:51.964186] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.550 [2024-12-04 14:21:51.964194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.550 [2024-12-04 14:21:51.964201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.550 [2024-12-04 14:21:51.964208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:50.550 [2024-12-04 14:21:51.964212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964220] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.550 [2024-12-04 14:21:51.964226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:50.550 [2024-12-04 14:21:51.964232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:50.550 [2024-12-04 14:21:51.964242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:50.550 [2024-12-04 14:21:51.964248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.550 [2024-12-04 14:21:51.964259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.550 [2024-12-04 14:21:51.964276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964281] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.550 [2024-12-04 14:21:51.964292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.550 [2024-12-04 14:21:51.964310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964321] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.550 [2024-12-04 14:21:51.964326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.550 [2024-12-04 14:21:51.964419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.550 [2024-12-04 14:21:51.964426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:50.550 [2024-12-04 14:21:51.964430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.550 [2024-12-04 14:21:51.964436] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.550 [2024-12-04 14:21:51.964442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.550 [2024-12-04 14:21:51.964448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.550 [2024-12-04 14:21:51.964462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.550 [2024-12-04 14:21:51.964469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.550 [2024-12-04 14:21:51.964475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.550 [2024-12-04 14:21:51.964480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.550 [2024-12-04 14:21:51.964488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.550 [2024-12-04 14:21:51.964492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.550 [2024-12-04 14:21:51.964500] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.550 [2024-12-04 14:21:51.964507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.550 [2024-12-04 14:21:51.964515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.550 [2024-12-04 14:21:51.964521] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:50.550 [2024-12-04 14:21:51.964527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:50.550 [2024-12-04 14:21:51.964533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 
00:20:50.550 [2024-12-04 14:21:51.964539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:50.550 [2024-12-04 14:21:51.964545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:50.550 [2024-12-04 14:21:51.964551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:50.550 [2024-12-04 14:21:51.964556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:50.550 [2024-12-04 14:21:51.964563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:50.550 [2024-12-04 14:21:51.964568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:50.550 [2024-12-04 14:21:51.964574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:50.550 [2024-12-04 14:21:51.964579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:50.550 [2024-12-04 14:21:51.964589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:50.550 [2024-12-04 14:21:51.964594] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.550 [2024-12-04 14:21:51.964602] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.550 [2024-12-04 14:21:51.964607] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.550 [2024-12-04 14:21:51.964614] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.550 [2024-12-04 14:21:51.964620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.550 [2024-12-04 14:21:51.964626] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.550 [2024-12-04 14:21:51.964632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:51.964638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.550 [2024-12-04 14:21:51.964644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:50.550 [2024-12-04 14:21:51.964650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:51.976517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:51.976548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.550 [2024-12-04 14:21:51.976556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.825 ms 00:20:50.550 [2024-12-04 14:21:51.976563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:51.976631] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:51.976640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.550 [2024-12-04 14:21:51.976648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:50.550 [2024-12-04 14:21:51.976655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:52.000487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.550 [2024-12-04 14:21:52.000515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.550 [2024-12-04 14:21:52.000523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.800 ms 00:20:50.550 [2024-12-04 14:21:52.000531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.550 [2024-12-04 14:21:52.000554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.551 [2024-12-04 14:21:52.000562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.551 [2024-12-04 14:21:52.000568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:50.551 [2024-12-04 14:21:52.000577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.551 [2024-12-04 14:21:52.000868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.551 [2024-12-04 14:21:52.000882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.551 [2024-12-04 14:21:52.000889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:50.551 [2024-12-04 14:21:52.000896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.551 [2024-12-04 14:21:52.000979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.551 [2024-12-04 14:21:52.000989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.551 [2024-12-04 14:21:52.000995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:50.551 [2024-12-04 14:21:52.001001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.812 [2024-12-04 14:21:52.012844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.812 [2024-12-04 14:21:52.012871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.812 [2024-12-04 14:21:52.012878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.828 ms 00:20:50.812 [2024-12-04 14:21:52.012885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.812 [2024-12-04 14:21:52.021843] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.812 [2024-12-04 14:21:52.024122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.812 [2024-12-04 14:21:52.024143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.812 [2024-12-04 14:21:52.024152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.182 ms 00:20:50.812 [2024-12-04 14:21:52.024158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.812 [2024-12-04 14:21:52.085755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.812 [2024-12-04 14:21:52.085790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:50.812 [2024-12-04 14:21:52.085802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 61.575 ms 00:20:50.812 [2024-12-04 14:21:52.085809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.812 [2024-12-04 14:21:52.085844] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:20:50.812 [2024-12-04 14:21:52.085853] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:54.114 [2024-12-04 14:21:55.103526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.103585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:54.114 [2024-12-04 14:21:55.103602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3017.664 ms 00:20:54.114 [2024-12-04 14:21:55.103611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.103782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.103792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.114 [2024-12-04 14:21:55.103806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:54.114 [2024-12-04 14:21:55.103813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.127555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.127687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:54.114 [2024-12-04 14:21:55.127709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.698 ms 00:20:54.114 [2024-12-04 14:21:55.127717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.150975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.151096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:54.114 [2024-12-04 14:21:55.151119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.231 ms 00:20:54.114 [2024-12-04 14:21:55.151126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.151421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.151431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.114 [2024-12-04 14:21:55.151440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:54.114 [2024-12-04 14:21:55.151447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.213891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.213923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:54.114 [2024-12-04 14:21:55.213936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.411 ms 00:20:54.114 [2024-12-04 14:21:55.213944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.114 [2024-12-04 14:21:55.238533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.114 [2024-12-04 14:21:55.238567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:54.115 [2024-12-04 14:21:55.238580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.553 ms 00:20:54.115 
[2024-12-04 14:21:55.238587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.115 [2024-12-04 14:21:55.239941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.115 [2024-12-04 14:21:55.239970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:54.115 [2024-12-04 14:21:55.239983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:20:54.115 [2024-12-04 14:21:55.239990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.115 [2024-12-04 14:21:55.263933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.115 [2024-12-04 14:21:55.263971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.115 [2024-12-04 14:21:55.263983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.909 ms 00:20:54.115 [2024-12-04 14:21:55.263990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.115 [2024-12-04 14:21:55.264033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.115 [2024-12-04 14:21:55.264041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.115 [2024-12-04 14:21:55.264051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.115 [2024-12-04 14:21:55.264057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.115 [2024-12-04 14:21:55.264152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.115 [2024-12-04 14:21:55.264163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.115 [2024-12-04 14:21:55.264173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:54.115 [2024-12-04 14:21:55.264181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.115 [2024-12-04 14:21:55.265120] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3310.915 ms, result 0 00:20:54.115 { 00:20:54.115 "name": "ftl0", 00:20:54.115 "uuid": "6f4bde51-aa2b-4599-b89d-de6a76aa5c08" 00:20:54.115 } 00:20:54.115 14:21:55 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:54.115 14:21:55 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:54.115 14:21:55 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:54.115 14:21:55 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:54.115 14:21:55 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:54.375 /dev/nbd0 00:20:54.375 14:21:55 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:54.375 14:21:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:54.375 14:21:55 -- common/autotest_common.sh@867 -- # local i 00:20:54.375 14:21:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:54.375 14:21:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:54.375 14:21:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:54.375 14:21:55 -- common/autotest_common.sh@871 -- # break 00:20:54.375 14:21:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:54.375 14:21:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:54.375 14:21:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:54.375 1+0 records in 00:20:54.375 
1+0 records out 00:20:54.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657883 s, 6.2 MB/s 00:20:54.375 14:21:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:54.375 14:21:55 -- common/autotest_common.sh@884 -- # size=4096 00:20:54.375 14:21:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:54.375 14:21:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:54.375 14:21:55 -- common/autotest_common.sh@887 -- # return 0 00:20:54.375 14:21:55 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:54.375 [2024-12-04 14:21:55.760510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:54.375 [2024-12-04 14:21:55.760615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75552 ] 00:20:54.634 [2024-12-04 14:21:55.909181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.635 [2024-12-04 14:21:56.085230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.017  [2024-12-04T14:21:58.420Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-04T14:21:59.356Z] Copying: 392/1024 [MB] (196 MBps) [2024-12-04T14:22:00.730Z] Copying: 613/1024 [MB] (221 MBps) [2024-12-04T14:22:00.989Z] Copying: 869/1024 [MB] (255 MBps) [2024-12-04T14:22:01.594Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:21:00.129 00:21:00.129 14:22:01 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:02.656 14:22:03 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:02.656 [2024-12-04 14:22:03.579851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:02.656 [2024-12-04 14:22:03.579960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75636 ] 00:21:02.656 [2024-12-04 14:22:03.727570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.656 [2024-12-04 14:22:03.904389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.030  [2024-12-04T14:22:06.425Z] Copying: 20/1024 [MB] (20 MBps) [2024-12-04T14:22:07.357Z] Copying: 47/1024 [MB] (27 MBps) [2024-12-04T14:22:08.294Z] Copying: 82/1024 [MB] (35 MBps) [2024-12-04T14:22:09.228Z] Copying: 119/1024 [MB] (36 MBps) [2024-12-04T14:22:10.161Z] Copying: 149/1024 [MB] (30 MBps) [2024-12-04T14:22:11.537Z] Copying: 179/1024 [MB] (30 MBps) [2024-12-04T14:22:12.471Z] Copying: 212/1024 [MB] (32 MBps) [2024-12-04T14:22:13.406Z] Copying: 249/1024 [MB] (37 MBps) [2024-12-04T14:22:14.344Z] Copying: 283/1024 [MB] (34 MBps) [2024-12-04T14:22:15.341Z] Copying: 313/1024 [MB] (30 MBps) [2024-12-04T14:22:16.272Z] Copying: 331/1024 [MB] (17 MBps) [2024-12-04T14:22:17.205Z] Copying: 360/1024 [MB] (28 MBps) [2024-12-04T14:22:18.139Z] Copying: 392/1024 [MB] (32 MBps) [2024-12-04T14:22:19.515Z] Copying: 427/1024 [MB] (35 MBps) [2024-12-04T14:22:20.448Z] Copying: 464/1024 [MB] (36 MBps) [2024-12-04T14:22:21.380Z] Copying: 500/1024 [MB] (36 MBps) [2024-12-04T14:22:22.314Z] Copying: 536/1024 [MB] (36 MBps) [2024-12-04T14:22:23.247Z] Copying: 571/1024 [MB] (35 MBps) [2024-12-04T14:22:24.183Z] Copying: 602/1024 [MB] (30 MBps) [2024-12-04T14:22:25.555Z] Copying: 635/1024 [MB] (33 MBps) [2024-12-04T14:22:26.486Z] Copying: 672/1024 [MB] (36 MBps) [2024-12-04T14:22:27.418Z] Copying: 708/1024 [MB] (36 MBps) [2024-12-04T14:22:28.352Z] Copying: 745/1024 [MB] (36 MBps) [2024-12-04T14:22:29.320Z] Copying: 780/1024 [MB] (35 MBps) [2024-12-04T14:22:30.255Z] Copying: 816/1024 [MB] (35 MBps) [2024-12-04T14:22:31.191Z] Copying: 847/1024 [MB] (30 MBps) [2024-12-04T14:22:32.221Z] Copying: 880/1024 [MB] (33 MBps) [2024-12-04T14:22:33.161Z] Copying: 911/1024 [MB] (30 MBps) [2024-12-04T14:22:34.536Z] Copying: 941/1024 [MB] (30 MBps) [2024-12-04T14:22:35.475Z] Copying: 972/1024 [MB] (30 MBps) [2024-12-04T14:22:35.732Z] Copying: 1006/1024 [MB] (33 MBps) [2024-12-04T14:22:36.298Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:21:34.833 00:21:34.833 14:22:36 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:21:34.833 14:22:36 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:21:35.094 14:22:36 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:35.356 [2024-12-04 14:22:36.642433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.356 [2024-12-04 14:22:36.642483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:35.357 [2024-12-04 14:22:36.642498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:35.357 [2024-12-04 14:22:36.642508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.642532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:35.357 [2024-12-04 14:22:36.645138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.645263] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:35.357 [2024-12-04 14:22:36.645282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:21:35.357 [2024-12-04 14:22:36.645290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.647599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.647626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:35.357 [2024-12-04 14:22:36.647643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.279 ms 00:21:35.357 [2024-12-04 14:22:36.647650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.664886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.664995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:35.357 [2024-12-04 14:22:36.665015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.215 ms 00:21:35.357 [2024-12-04 14:22:36.665023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.671155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.671261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:35.357 [2024-12-04 14:22:36.671280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:21:35.357 [2024-12-04 14:22:36.671291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.695544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.695573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:35.357 [2024-12-04 14:22:36.695586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.179 ms 00:21:35.357 [2024-12-04 14:22:36.695593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.710526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.710637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:35.357 [2024-12-04 14:22:36.710657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.897 ms 00:21:35.357 [2024-12-04 14:22:36.710665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.710808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.710819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:35.357 [2024-12-04 14:22:36.710829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:21:35.357 [2024-12-04 14:22:36.710836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.734351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.734379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:35.357 [2024-12-04 14:22:36.734391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.494 ms 00:21:35.357 [2024-12-04 14:22:36.734398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.757784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 
[2024-12-04 14:22:36.757890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:35.357 [2024-12-04 14:22:36.757908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.349 ms 00:21:35.357 [2024-12-04 14:22:36.757915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.780575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.780671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:35.357 [2024-12-04 14:22:36.780688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.628 ms 00:21:35.357 [2024-12-04 14:22:36.780695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.803681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.357 [2024-12-04 14:22:36.803775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:35.357 [2024-12-04 14:22:36.803791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.919 ms 00:21:35.357 [2024-12-04 14:22:36.803798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.357 [2024-12-04 14:22:36.803830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:35.357 [2024-12-04 14:22:36.803843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803971] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.803994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 
14:22:36.804192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:35.357 [2024-12-04 14:22:36.804280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:21:35.358 [2024-12-04 14:22:36.804421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:35.358 [2024-12-04 14:22:36.804714] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:35.358 [2024-12-04 14:22:36.804723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f4bde51-aa2b-4599-b89d-de6a76aa5c08 00:21:35.358 [2024-12-04 14:22:36.804733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:35.358 [2024-12-04 14:22:36.804741] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:35.358 [2024-12-04 14:22:36.804747] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:35.358 [2024-12-04 14:22:36.804760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:35.358 [2024-12-04 14:22:36.804766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:35.358 [2024-12-04 14:22:36.804775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:35.358 [2024-12-04 14:22:36.804782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:35.358 [2024-12-04 14:22:36.804790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:35.358 [2024-12-04 14:22:36.804796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:35.358 [2024-12-04 14:22:36.804805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.358 [2024-12-04 14:22:36.804812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:35.358 [2024-12-04 14:22:36.804822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:21:35.358 [2024-12-04 14:22:36.804829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.358 [2024-12-04 14:22:36.816916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.358 [2024-12-04 14:22:36.816941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:35.358 [2024-12-04 14:22:36.816954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.056 ms 00:21:35.358 [2024-12-04 14:22:36.816963] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.358 [2024-12-04 14:22:36.817190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.358 [2024-12-04 14:22:36.817199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:35.358 [2024-12-04 14:22:36.817209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:21:35.358 [2024-12-04 14:22:36.817216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.861145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.861180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.620 [2024-12-04 14:22:36.861192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.861200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.861258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.861266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.620 [2024-12-04 14:22:36.861275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.861282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.861347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.861356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.620 [2024-12-04 14:22:36.861366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.861373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.861390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.861398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.620 [2024-12-04 14:22:36.861407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.861414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.935756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.935794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.620 [2024-12-04 14:22:36.935807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.935815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.620 [2024-12-04 14:22:36.964331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.620 [2024-12-04 14:22:36.964411] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.620 [2024-12-04 14:22:36.964483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.620 [2024-12-04 14:22:36.964593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.620 [2024-12-04 14:22:36.964648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.620 [2024-12-04 14:22:36.964709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.620 [2024-12-04 14:22:36.964765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.620 [2024-12-04 14:22:36.964774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.620 [2024-12-04 14:22:36.964781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.620 [2024-12-04 14:22:36.964904] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 322.436 ms, result 0 00:21:35.620 true 00:21:35.620 14:22:36 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75408 00:21:35.620 14:22:36 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75408 00:21:35.620 14:22:36 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:21:35.620 [2024-12-04 14:22:37.049037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:35.620 [2024-12-04 14:22:37.049158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75995 ] 00:21:35.881 [2024-12-04 14:22:37.198240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.142 [2024-12-04 14:22:37.370918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.528  [2024-12-04T14:22:39.928Z] Copying: 197/1024 [MB] (197 MBps) [2024-12-04T14:22:40.862Z] Copying: 458/1024 [MB] (261 MBps) [2024-12-04T14:22:41.795Z] Copying: 719/1024 [MB] (260 MBps) [2024-12-04T14:22:41.795Z] Copying: 977/1024 [MB] (257 MBps) [2024-12-04T14:22:42.380Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:21:40.915 00:21:41.174 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75408 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:21:41.174 14:22:42 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:41.174 [2024-12-04 14:22:42.459444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:41.174 [2024-12-04 14:22:42.459554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76054 ] 00:21:41.174 [2024-12-04 14:22:42.599536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.432 [2024-12-04 14:22:42.737154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.690 [2024-12-04 14:22:42.941241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.690 [2024-12-04 14:22:42.941290] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.690 [2024-12-04 14:22:43.000737] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:41.690 [2024-12-04 14:22:43.001028] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:41.690 [2024-12-04 14:22:43.001230] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:41.950 [2024-12-04 14:22:43.173211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.173242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:41.950 [2024-12-04 14:22:43.173252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.950 [2024-12-04 14:22:43.173258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.173290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.173298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.950 [2024-12-04 14:22:43.173306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:41.950 [2024-12-04 14:22:43.173312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.173324] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:41.950 [2024-12-04 14:22:43.173861] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:41.950 [2024-12-04 14:22:43.173874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.173879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.950 [2024-12-04 14:22:43.173885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:21:41.950 [2024-12-04 14:22:43.173891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.174980] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:41.950 [2024-12-04 14:22:43.184635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.184664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:41.950 [2024-12-04 14:22:43.184673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.656 ms 00:21:41.950 [2024-12-04 14:22:43.184679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.184718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.184727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:41.950 [2024-12-04 14:22:43.184734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:41.950 [2024-12-04 14:22:43.184739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.188956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.189070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.950 [2024-12-04 14:22:43.189081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:21:41.950 [2024-12-04 14:22:43.189100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.189165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.189172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.950 [2024-12-04 14:22:43.189178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:41.950 [2024-12-04 14:22:43.189183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-12-04 14:22:43.189215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-12-04 14:22:43.189221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:41.951 [2024-12-04 14:22:43.189228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.951 [2024-12-04 14:22:43.189233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.951 [2024-12-04 14:22:43.189250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.951 [2024-12-04 14:22:43.191953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.951 [2024-12-04 14:22:43.191975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.951 [2024-12-04 14:22:43.191982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:21:41.951 [2024-12-04 14:22:43.191987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.951 [2024-12-04 14:22:43.192016] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.951 [2024-12-04 14:22:43.192023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:41.951 [2024-12-04 14:22:43.192029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:41.951 [2024-12-04 14:22:43.192035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.951 [2024-12-04 14:22:43.192048] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:41.951 [2024-12-04 14:22:43.192062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:41.951 [2024-12-04 14:22:43.192183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:41.951 [2024-12-04 14:22:43.192223] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:41.951 [2024-12-04 14:22:43.192297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:41.951 [2024-12-04 14:22:43.192379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:41.951 [2024-12-04 14:22:43.192387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:41.951 [2024-12-04 14:22:43.192395] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192403] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192408] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:41.951 [2024-12-04 14:22:43.192414] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:41.951 [2024-12-04 14:22:43.192419] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:41.951 [2024-12-04 14:22:43.192425] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:41.951 [2024-12-04 14:22:43.192433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.951 [2024-12-04 14:22:43.192439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:41.951 [2024-12-04 14:22:43.192445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:21:41.951 [2024-12-04 14:22:43.192450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.951 [2024-12-04 14:22:43.192498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.951 [2024-12-04 14:22:43.192504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:41.951 [2024-12-04 14:22:43.192510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:41.951 [2024-12-04 14:22:43.192515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.951 [2024-12-04 14:22:43.192567] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:41.951 [2024-12-04 14:22:43.192575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:41.951 [2024-12-04 14:22:43.192582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:41.951 [2024-12-04 14:22:43.192599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:41.951 [2024-12-04 14:22:43.192614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.951 [2024-12-04 14:22:43.192624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:41.951 [2024-12-04 14:22:43.192629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:41.951 [2024-12-04 14:22:43.192638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.951 [2024-12-04 14:22:43.192643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:41.951 [2024-12-04 14:22:43.192648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:41.951 [2024-12-04 14:22:43.192653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:41.951 [2024-12-04 14:22:43.192665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:41.951 [2024-12-04 14:22:43.192670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:41.951 [2024-12-04 14:22:43.192680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:41.951 [2024-12-04 14:22:43.192686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:41.951 [2024-12-04 14:22:43.192696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:41.951 [2024-12-04 14:22:43.192711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:41.951 [2024-12-04 14:22:43.192726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:41.951 [2024-12-04 14:22:43.192741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:41.951 [2024-12-04 14:22:43.192755] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 97.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.951 [2024-12-04 14:22:43.192764] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:41.951 [2024-12-04 14:22:43.192769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:41.951 [2024-12-04 14:22:43.192774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.951 [2024-12-04 14:22:43.192778] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:41.951 [2024-12-04 14:22:43.192783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:41.951 [2024-12-04 14:22:43.192788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.951 [2024-12-04 14:22:43.192799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:41.951 [2024-12-04 14:22:43.192804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:41.951 [2024-12-04 14:22:43.192809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:41.951 [2024-12-04 14:22:43.192814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:41.951 [2024-12-04 14:22:43.192819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:41.951 [2024-12-04 14:22:43.192824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:41.951 [2024-12-04 14:22:43.192829] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:41.951 [2024-12-04 14:22:43.192836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.951 [2024-12-04 14:22:43.192843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:41.951 [2024-12-04 14:22:43.192848] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:41.951 [2024-12-04 14:22:43.192854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:41.951 [2024-12-04 14:22:43.192859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:41.951 [2024-12-04 14:22:43.192864] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:41.951 [2024-12-04 14:22:43.192869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:41.951 [2024-12-04 14:22:43.192874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:41.951 [2024-12-04 14:22:43.192879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:41.951 [2024-12-04 14:22:43.192885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:41.951 [2024-12-04 14:22:43.192890] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:41.951 [2024-12-04 14:22:43.192896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:41.951 [2024-12-04 14:22:43.192901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:41.951 [2024-12-04 14:22:43.192907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:41.951 [2024-12-04 14:22:43.192912] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:41.951 [2024-12-04 14:22:43.192917] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.951 [2024-12-04 14:22:43.192925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:41.952 [2024-12-04 14:22:43.192931] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:41.952 [2024-12-04 14:22:43.192936] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:41.952 [2024-12-04 14:22:43.192942] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:41.952 [2024-12-04 14:22:43.192947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.192953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:41.952 [2024-12-04 14:22:43.192958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:21:41.952 [2024-12-04 14:22:43.192963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.204827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.204911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.952 [2024-12-04 14:22:43.204950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.838 ms 00:21:41.952 [2024-12-04 14:22:43.204967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.205038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.205116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.952 [2024-12-04 14:22:43.205155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:41.952 [2024-12-04 14:22:43.205171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.240576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.240680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.952 [2024-12-04 14:22:43.240726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.361 ms 00:21:41.952 [2024-12-04 14:22:43.240745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.240785] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.240804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.952 [2024-12-04 14:22:43.240819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:41.952 [2024-12-04 14:22:43.240836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.241154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.241220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.952 [2024-12-04 14:22:43.241260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:41.952 [2024-12-04 14:22:43.241276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.241373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.241616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.952 [2024-12-04 14:22:43.241651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:41.952 [2024-12-04 14:22:43.241668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.252728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.252815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.952 [2024-12-04 14:22:43.252858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.960 ms 00:21:41.952 [2024-12-04 14:22:43.252875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.262407] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:41.952 [2024-12-04 14:22:43.262504] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:41.952 [2024-12-04 14:22:43.262552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.262569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:41.952 [2024-12-04 14:22:43.262584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.592 ms 00:21:41.952 [2024-12-04 14:22:43.262633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.281170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.281257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:41.952 [2024-12-04 14:22:43.281304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.504 ms 00:21:41.952 [2024-12-04 14:22:43.281321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.290102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.290184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:41.952 [2024-12-04 14:22:43.290222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.746 ms 00:21:41.952 [2024-12-04 14:22:43.290246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.299459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:41.952 [2024-12-04 14:22:43.299542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:41.952 [2024-12-04 14:22:43.299580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.182 ms 00:21:41.952 [2024-12-04 14:22:43.299596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.299873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.299929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.952 [2024-12-04 14:22:43.299967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:21:41.952 [2024-12-04 14:22:43.299983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.345472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.345639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:41.952 [2024-12-04 14:22:43.345683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.462 ms 00:21:41.952 [2024-12-04 14:22:43.345700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.353987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:41.952 [2024-12-04 14:22:43.356022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.356119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:41.952 [2024-12-04 14:22:43.356168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:21:41.952 [2024-12-04 14:22:43.356187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.356261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.356315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:41.952 [2024-12-04 14:22:43.356333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.952 [2024-12-04 14:22:43.356347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.356433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.356454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:41.952 [2024-12-04 14:22:43.356602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:41.952 [2024-12-04 14:22:43.356625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.357559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.357634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:41.952 [2024-12-04 14:22:43.357945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:21:41.952 [2024-12-04 14:22:43.357986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.358065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.358095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:41.952 [2024-12-04 14:22:43.358116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.952 
[2024-12-04 14:22:43.358156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.358195] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:41.952 [2024-12-04 14:22:43.358291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.358308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:41.952 [2024-12-04 14:22:43.358323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:41.952 [2024-12-04 14:22:43.358338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.376578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.376676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:41.952 [2024-12-04 14:22:43.376726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms 00:21:41.952 [2024-12-04 14:22:43.376744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.376815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.952 [2024-12-04 14:22:43.376836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:41.952 [2024-12-04 14:22:43.376872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:41.952 [2024-12-04 14:22:43.376889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.952 [2024-12-04 14:22:43.377774] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 204.238 ms, result 0 00:21:43.328  [2024-12-04T14:22:45.736Z] Copying: 52/1024 [MB] (52 MBps) [2024-12-04T14:22:46.671Z] Copying: 80/1024 [MB] (27 MBps) [2024-12-04T14:22:47.605Z] Copying: 106/1024 [MB] (26 MBps) [2024-12-04T14:22:48.539Z] Copying: 160/1024 [MB] (53 MBps) [2024-12-04T14:22:49.475Z] Copying: 214/1024 [MB] (54 MBps) [2024-12-04T14:22:50.419Z] Copying: 262/1024 [MB] (48 MBps) [2024-12-04T14:22:51.805Z] Copying: 284/1024 [MB] (21 MBps) [2024-12-04T14:22:52.748Z] Copying: 305/1024 [MB] (21 MBps) [2024-12-04T14:22:53.691Z] Copying: 322/1024 [MB] (17 MBps) [2024-12-04T14:22:54.632Z] Copying: 344/1024 [MB] (21 MBps) [2024-12-04T14:22:55.616Z] Copying: 369/1024 [MB] (25 MBps) [2024-12-04T14:22:56.557Z] Copying: 389/1024 [MB] (19 MBps) [2024-12-04T14:22:57.497Z] Copying: 402/1024 [MB] (12 MBps) [2024-12-04T14:22:58.439Z] Copying: 417/1024 [MB] (15 MBps) [2024-12-04T14:22:59.825Z] Copying: 436/1024 [MB] (18 MBps) [2024-12-04T14:23:00.398Z] Copying: 450/1024 [MB] (14 MBps) [2024-12-04T14:23:01.783Z] Copying: 466/1024 [MB] (16 MBps) [2024-12-04T14:23:02.727Z] Copying: 486/1024 [MB] (19 MBps) [2024-12-04T14:23:03.670Z] Copying: 501/1024 [MB] (15 MBps) [2024-12-04T14:23:04.612Z] Copying: 526/1024 [MB] (24 MBps) [2024-12-04T14:23:05.546Z] Copying: 545/1024 [MB] (18 MBps) [2024-12-04T14:23:06.478Z] Copying: 569/1024 [MB] (24 MBps) [2024-12-04T14:23:07.470Z] Copying: 621/1024 [MB] (51 MBps) [2024-12-04T14:23:08.413Z] Copying: 643/1024 [MB] (22 MBps) [2024-12-04T14:23:09.798Z] Copying: 667/1024 [MB] (23 MBps) [2024-12-04T14:23:10.743Z] Copying: 690/1024 [MB] (23 MBps) [2024-12-04T14:23:11.687Z] Copying: 707/1024 [MB] (17 MBps) [2024-12-04T14:23:12.631Z] Copying: 731/1024 [MB] (24 MBps) [2024-12-04T14:23:13.574Z] Copying: 755/1024 [MB] (23 MBps) [2024-12-04T14:23:14.517Z] Copying: 
776/1024 [MB] (21 MBps) [2024-12-04T14:23:15.461Z] Copying: 803/1024 [MB] (27 MBps) [2024-12-04T14:23:16.399Z] Copying: 824/1024 [MB] (21 MBps) [2024-12-04T14:23:17.781Z] Copying: 844/1024 [MB] (19 MBps) [2024-12-04T14:23:18.723Z] Copying: 868/1024 [MB] (24 MBps) [2024-12-04T14:23:19.666Z] Copying: 891/1024 [MB] (22 MBps) [2024-12-04T14:23:20.610Z] Copying: 913/1024 [MB] (22 MBps) [2024-12-04T14:23:21.555Z] Copying: 936/1024 [MB] (22 MBps) [2024-12-04T14:23:22.499Z] Copying: 955/1024 [MB] (18 MBps) [2024-12-04T14:23:23.486Z] Copying: 974/1024 [MB] (19 MBps) [2024-12-04T14:23:24.421Z] Copying: 992/1024 [MB] (17 MBps) [2024-12-04T14:23:25.807Z] Copying: 1012/1024 [MB] (19 MBps) [2024-12-04T14:23:26.068Z] Copying: 1023/1024 [MB] (11 MBps) [2024-12-04T14:23:26.068Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-04 14:23:25.835646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.835905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:24.603 [2024-12-04 14:23:25.835933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.603 [2024-12-04 14:23:25.835945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.838661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:24.603 [2024-12-04 14:23:25.843259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.843301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:24.603 [2024-12-04 14:23:25.843322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:22:24.603 [2024-12-04 14:23:25.843332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.858652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.858700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:24.603 [2024-12-04 14:23:25.858714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.474 ms 00:22:24.603 [2024-12-04 14:23:25.858722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.883540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.883587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:24.603 [2024-12-04 14:23:25.883600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.800 ms 00:22:24.603 [2024-12-04 14:23:25.883608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.889742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.889832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:24.603 [2024-12-04 14:23:25.889849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.088 ms 00:22:24.603 [2024-12-04 14:23:25.889857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.916533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.916712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:24.603 [2024-12-04 14:23:25.916734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.622 ms 
00:22:24.603 [2024-12-04 14:23:25.916743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.603 [2024-12-04 14:23:25.932753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.603 [2024-12-04 14:23:25.932798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:24.603 [2024-12-04 14:23:25.932811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.933 ms 00:22:24.603 [2024-12-04 14:23:25.932818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.864 [2024-12-04 14:23:26.180287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.864 [2024-12-04 14:23:26.180329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:24.865 [2024-12-04 14:23:26.180340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 247.419 ms 00:22:24.865 [2024-12-04 14:23:26.180348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.865 [2024-12-04 14:23:26.204835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.865 [2024-12-04 14:23:26.204870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:24.865 [2024-12-04 14:23:26.204880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.469 ms 00:22:24.865 [2024-12-04 14:23:26.204887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.865 [2024-12-04 14:23:26.228853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.865 [2024-12-04 14:23:26.228892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:24.865 [2024-12-04 14:23:26.228902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.931 ms 00:22:24.865 [2024-12-04 14:23:26.228909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.865 [2024-12-04 14:23:26.253281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.865 [2024-12-04 14:23:26.253323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:24.865 [2024-12-04 14:23:26.253335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:22:24.865 [2024-12-04 14:23:26.253343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.865 [2024-12-04 14:23:26.277732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.865 [2024-12-04 14:23:26.277775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:24.865 [2024-12-04 14:23:26.277787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.306 ms 00:22:24.865 [2024-12-04 14:23:26.277794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.865 [2024-12-04 14:23:26.277835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:24.865 [2024-12-04 14:23:26.277851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94464 / 261120 wr_cnt: 1 state: open 00:22:24.865 [2024-12-04 14:23:26.277862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 
14:23:26.277886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.277996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:22:24.865 [2024-12-04 14:23:26.278081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:24.865 [2024-12-04 14:23:26.278482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:24.866 [2024-12-04 14:23:26.278709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:24.866 [2024-12-04 14:23:26.278720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f4bde51-aa2b-4599-b89d-de6a76aa5c08 00:22:24.866 [2024-12-04 14:23:26.278728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94464 00:22:24.866 [2024-12-04 14:23:26.278736] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95424 
00:22:24.866 [2024-12-04 14:23:26.278744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94464 00:22:24.866 [2024-12-04 14:23:26.278762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:22:24.866 [2024-12-04 14:23:26.278770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:24.866 [2024-12-04 14:23:26.278778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:24.866 [2024-12-04 14:23:26.278786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:24.866 [2024-12-04 14:23:26.278793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:24.866 [2024-12-04 14:23:26.278800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:24.866 [2024-12-04 14:23:26.278807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.866 [2024-12-04 14:23:26.278815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:24.866 [2024-12-04 14:23:26.278824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:22:24.866 [2024-12-04 14:23:26.278832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.866 [2024-12-04 14:23:26.291865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.866 [2024-12-04 14:23:26.292030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:24.866 [2024-12-04 14:23:26.292049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.995 ms 00:22:24.866 [2024-12-04 14:23:26.292057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.866 [2024-12-04 14:23:26.292304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.866 [2024-12-04 14:23:26.292315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:24.866 [2024-12-04 14:23:26.292330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:22:24.866 [2024-12-04 14:23:26.292338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.331213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.331256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.127 [2024-12-04 14:23:26.331268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.331277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.331341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.331351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.127 [2024-12-04 14:23:26.331365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.331373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.331447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.331458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.127 [2024-12-04 14:23:26.331467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.331475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.331491] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.331500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.127 [2024-12-04 14:23:26.331508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.331520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.411730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.411781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.127 [2024-12-04 14:23:26.411795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.411803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.443545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.443591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.127 [2024-12-04 14:23:26.443609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.443617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.443684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.443694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.127 [2024-12-04 14:23:26.443703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.443712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.443754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.443764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.127 [2024-12-04 14:23:26.443773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.443781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.443883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.443895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.127 [2024-12-04 14:23:26.443904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.443912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.443946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.443956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.127 [2024-12-04 14:23:26.443964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.443972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.444014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.444024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.127 [2024-12-04 14:23:26.444032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.444040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:25.127 [2024-12-04 14:23:26.444118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.127 [2024-12-04 14:23:26.444130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.127 [2024-12-04 14:23:26.444138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.127 [2024-12-04 14:23:26.444147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.127 [2024-12-04 14:23:26.444280] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 608.673 ms, result 0 00:22:27.036 00:22:27.036 00:22:27.036 14:23:28 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:28.951 14:23:30 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:29.213 [2024-12-04 14:23:30.440483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:29.213 [2024-12-04 14:23:30.440610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76547 ] 00:22:29.213 [2024-12-04 14:23:30.594735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.473 [2024-12-04 14:23:30.815676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.734 [2024-12-04 14:23:31.103321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.734 [2024-12-04 14:23:31.103405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.996 [2024-12-04 14:23:31.258230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.258490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.996 [2024-12-04 14:23:31.258516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.996 [2024-12-04 14:23:31.258530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.258597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.258608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.996 [2024-12-04 14:23:31.258618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:29.996 [2024-12-04 14:23:31.258626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.258647] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.996 [2024-12-04 14:23:31.259419] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.996 [2024-12-04 14:23:31.259439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.259448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.996 [2024-12-04 14:23:31.259457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:22:29.996 [2024-12-04 14:23:31.259464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 
14:23:31.261114] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:29.996 [2024-12-04 14:23:31.275450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.275494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:29.996 [2024-12-04 14:23:31.275507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.338 ms 00:22:29.996 [2024-12-04 14:23:31.275515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.275589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.275599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:29.996 [2024-12-04 14:23:31.275608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:29.996 [2024-12-04 14:23:31.275615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.283639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.283818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.996 [2024-12-04 14:23:31.283837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.945 ms 00:22:29.996 [2024-12-04 14:23:31.283846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.283946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.283955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.996 [2024-12-04 14:23:31.283964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:29.996 [2024-12-04 14:23:31.283972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.284017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.284027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.996 [2024-12-04 14:23:31.284035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:29.996 [2024-12-04 14:23:31.284042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.284074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.996 [2024-12-04 14:23:31.288181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.288218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.996 [2024-12-04 14:23:31.288229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.120 ms 00:22:29.996 [2024-12-04 14:23:31.288237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.288275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.996 [2024-12-04 14:23:31.288283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.996 [2024-12-04 14:23:31.288293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:29.996 [2024-12-04 14:23:31.288303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.996 [2024-12-04 14:23:31.288353] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:29.996 [2024-12-04 
14:23:31.288376] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:29.996 [2024-12-04 14:23:31.288411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:29.997 [2024-12-04 14:23:31.288426] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:29.997 [2024-12-04 14:23:31.288506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:29.997 [2024-12-04 14:23:31.288517] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.997 [2024-12-04 14:23:31.288531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:29.997 [2024-12-04 14:23:31.288541] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288551] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288559] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.997 [2024-12-04 14:23:31.288566] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.997 [2024-12-04 14:23:31.288574] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:29.997 [2024-12-04 14:23:31.288583] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:29.997 [2024-12-04 14:23:31.288591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.997 [2024-12-04 14:23:31.288599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.997 [2024-12-04 14:23:31.288607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:22:29.997 [2024-12-04 14:23:31.288614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.997 [2024-12-04 14:23:31.288676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.997 [2024-12-04 14:23:31.288685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.997 [2024-12-04 14:23:31.288693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:29.997 [2024-12-04 14:23:31.288700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.997 [2024-12-04 14:23:31.288770] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.997 [2024-12-04 14:23:31.288780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.997 [2024-12-04 14:23:31.288788] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.997 [2024-12-04 14:23:31.288811] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.997 [2024-12-04 14:23:31.288833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 
80.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.997 [2024-12-04 14:23:31.288848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.997 [2024-12-04 14:23:31.288855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.997 [2024-12-04 14:23:31.288863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.997 [2024-12-04 14:23:31.288870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.997 [2024-12-04 14:23:31.288878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:29.997 [2024-12-04 14:23:31.288885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288899] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.997 [2024-12-04 14:23:31.288906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:29.997 [2024-12-04 14:23:31.288913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:29.997 [2024-12-04 14:23:31.288926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:29.997 [2024-12-04 14:23:31.288933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.997 [2024-12-04 14:23:31.288947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.997 [2024-12-04 14:23:31.288966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288979] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.997 [2024-12-04 14:23:31.288985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.997 [2024-12-04 14:23:31.288991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.997 [2024-12-04 14:23:31.288998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.997 [2024-12-04 14:23:31.289004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:29.997 [2024-12-04 14:23:31.289011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:29.997 [2024-12-04 14:23:31.289017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.997 [2024-12-04 14:23:31.289024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.997 [2024-12-04 14:23:31.289030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.997 [2024-12-04 14:23:31.289037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.997 [2024-12-04 14:23:31.289043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:29.997 [2024-12-04 14:23:31.289050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.997 [2024-12-04 14:23:31.289056] ftl_layout.c: 
766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.997 [2024-12-04 14:23:31.289067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.997 [2024-12-04 14:23:31.289074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.997 [2024-12-04 14:23:31.289082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.997 [2024-12-04 14:23:31.289118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.997 [2024-12-04 14:23:31.289126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.997 [2024-12-04 14:23:31.289133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.997 [2024-12-04 14:23:31.289141] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.997 [2024-12-04 14:23:31.289148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.997 [2024-12-04 14:23:31.289155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.997 [2024-12-04 14:23:31.289164] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.997 [2024-12-04 14:23:31.289173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.997 [2024-12-04 14:23:31.289182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.997 [2024-12-04 14:23:31.289190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:29.997 [2024-12-04 14:23:31.289198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:29.997 [2024-12-04 14:23:31.289207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:29.997 [2024-12-04 14:23:31.289214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:29.997 [2024-12-04 14:23:31.289222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:29.997 [2024-12-04 14:23:31.289230] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:29.997 [2024-12-04 14:23:31.289237] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:29.997 [2024-12-04 14:23:31.289252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:29.997 [2024-12-04 14:23:31.289259] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:29.997 [2024-12-04 14:23:31.289266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:29.997 [2024-12-04 14:23:31.289273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:29.997 [2024-12-04 14:23:31.289281] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:29.997 [2024-12-04 14:23:31.289288] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.997 [2024-12-04 14:23:31.289296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.997 [2024-12-04 14:23:31.289304] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.997 [2024-12-04 14:23:31.289311] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.997 [2024-12-04 14:23:31.289318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.997 [2024-12-04 14:23:31.289326] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.997 [2024-12-04 14:23:31.289333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.997 [2024-12-04 14:23:31.289341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.997 [2024-12-04 14:23:31.289348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:22:29.997 [2024-12-04 14:23:31.289356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.997 [2024-12-04 14:23:31.307424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.997 [2024-12-04 14:23:31.307469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.997 [2024-12-04 14:23:31.307482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.027 ms 00:22:29.997 [2024-12-04 14:23:31.307496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.307587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.307597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.998 [2024-12-04 14:23:31.307606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:29.998 [2024-12-04 14:23:31.307615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.353660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.353717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.998 [2024-12-04 14:23:31.353731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.994 ms 00:22:29.998 [2024-12-04 14:23:31.353740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.353787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.353797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.998 [2024-12-04 14:23:31.353806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:29.998 [2024-12-04 14:23:31.353814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.354414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 
14:23:31.354447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.998 [2024-12-04 14:23:31.354458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:22:29.998 [2024-12-04 14:23:31.354471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.354601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.354611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.998 [2024-12-04 14:23:31.354619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:29.998 [2024-12-04 14:23:31.354628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.370977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.371020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.998 [2024-12-04 14:23:31.371031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.326 ms 00:22:29.998 [2024-12-04 14:23:31.371039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.385586] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:29.998 [2024-12-04 14:23:31.385649] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:29.998 [2024-12-04 14:23:31.385663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.385671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:29.998 [2024-12-04 14:23:31.385682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:22:29.998 [2024-12-04 14:23:31.385689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.412096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.412147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:29.998 [2024-12-04 14:23:31.412159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.345 ms 00:22:29.998 [2024-12-04 14:23:31.412168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.425247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.425426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:29.998 [2024-12-04 14:23:31.425448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.019 ms 00:22:29.998 [2024-12-04 14:23:31.425456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.437932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.437978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:29.998 [2024-12-04 14:23:31.438001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:22:29.998 [2024-12-04 14:23:31.438008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.998 [2024-12-04 14:23:31.438447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.998 [2024-12-04 14:23:31.438463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.998 [2024-12-04 14:23:31.438474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:22:29.998 [2024-12-04 14:23:31.438482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.259 [2024-12-04 14:23:31.503773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.259 [2024-12-04 14:23:31.503832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:30.260 [2024-12-04 14:23:31.503848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.272 ms 00:22:30.260 [2024-12-04 14:23:31.503857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.515415] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:30.260 [2024-12-04 14:23:31.518503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.518547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:30.260 [2024-12-04 14:23:31.518559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.584 ms 00:22:30.260 [2024-12-04 14:23:31.518573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.518654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.518666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:30.260 [2024-12-04 14:23:31.518675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:30.260 [2024-12-04 14:23:31.518684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.520155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.520291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:30.260 [2024-12-04 14:23:31.520348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:22:30.260 [2024-12-04 14:23:31.520372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.521696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.521835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:30.260 [2024-12-04 14:23:31.521891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:22:30.260 [2024-12-04 14:23:31.521913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.521964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.521985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:30.260 [2024-12-04 14:23:31.522014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:30.260 [2024-12-04 14:23:31.522034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.522100] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:30.260 [2024-12-04 14:23:31.522114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.522125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:30.260 [2024-12-04 14:23:31.522133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:22:30.260 [2024-12-04 14:23:31.522140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.548173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.548221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:30.260 [2024-12-04 14:23:31.548235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.008 ms 00:22:30.260 [2024-12-04 14:23:31.548243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.548332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.260 [2024-12-04 14:23:31.548343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:30.260 [2024-12-04 14:23:31.548352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:30.260 [2024-12-04 14:23:31.548360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.260 [2024-12-04 14:23:31.554824] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 294.011 ms, result 0 00:22:31.646  [2024-12-04T14:23:34.056Z] Copying: 1156/1048576 [kB] (1156 kBps) [2024-12-04T14:23:34.995Z] Copying: 3896/1048576 [kB] (2740 kBps) [2024-12-04T14:23:35.956Z] Copying: 12804/1048576 [kB] (8908 kBps) [2024-12-04T14:23:36.897Z] Copying: 52/1024 [MB] (40 MBps) [2024-12-04T14:23:37.899Z] Copying: 68/1024 [MB] (15 MBps) [2024-12-04T14:23:38.845Z] Copying: 93/1024 [MB] (24 MBps) [2024-12-04T14:23:39.790Z] Copying: 113/1024 [MB] (20 MBps) [2024-12-04T14:23:41.174Z] Copying: 145/1024 [MB] (31 MBps) [2024-12-04T14:23:41.749Z] Copying: 176/1024 [MB] (30 MBps) [2024-12-04T14:23:43.137Z] Copying: 209/1024 [MB] (33 MBps) [2024-12-04T14:23:44.082Z] Copying: 240/1024 [MB] (31 MBps) [2024-12-04T14:23:45.026Z] Copying: 268/1024 [MB] (28 MBps) [2024-12-04T14:23:45.967Z] Copying: 298/1024 [MB] (29 MBps) [2024-12-04T14:23:46.913Z] Copying: 328/1024 [MB] (30 MBps) [2024-12-04T14:23:47.858Z] Copying: 357/1024 [MB] (29 MBps) [2024-12-04T14:23:48.800Z] Copying: 385/1024 [MB] (27 MBps) [2024-12-04T14:23:49.744Z] Copying: 413/1024 [MB] (27 MBps) [2024-12-04T14:23:51.130Z] Copying: 440/1024 [MB] (27 MBps) [2024-12-04T14:23:52.078Z] Copying: 461/1024 [MB] (21 MBps) [2024-12-04T14:23:53.088Z] Copying: 478/1024 [MB] (16 MBps) [2024-12-04T14:23:54.031Z] Copying: 511/1024 [MB] (32 MBps) [2024-12-04T14:23:54.972Z] Copying: 527/1024 [MB] (16 MBps) [2024-12-04T14:23:55.911Z] Copying: 546/1024 [MB] (19 MBps) [2024-12-04T14:23:56.851Z] Copying: 569/1024 [MB] (22 MBps) [2024-12-04T14:23:57.797Z] Copying: 590/1024 [MB] (20 MBps) [2024-12-04T14:23:58.741Z] Copying: 606/1024 [MB] (16 MBps) [2024-12-04T14:24:00.131Z] Copying: 622/1024 [MB] (16 MBps) [2024-12-04T14:24:01.074Z] Copying: 639/1024 [MB] (16 MBps) [2024-12-04T14:24:02.018Z] Copying: 657/1024 [MB] (17 MBps) [2024-12-04T14:24:02.963Z] Copying: 675/1024 [MB] (18 MBps) [2024-12-04T14:24:03.909Z] Copying: 694/1024 [MB] (18 MBps) [2024-12-04T14:24:04.856Z] Copying: 712/1024 [MB] (17 MBps) [2024-12-04T14:24:05.799Z] Copying: 729/1024 [MB] (17 MBps) [2024-12-04T14:24:06.762Z] Copying: 759/1024 [MB] (30 MBps) [2024-12-04T14:24:08.155Z] Copying: 789/1024 [MB] (29 MBps) [2024-12-04T14:24:09.097Z] Copying: 821/1024 [MB] (32 MBps) [2024-12-04T14:24:10.039Z] Copying: 854/1024 [MB] (32 MBps) [2024-12-04T14:24:10.984Z] Copying: 880/1024 [MB] (26 MBps) 
[2024-12-04T14:24:11.930Z] Copying: 911/1024 [MB] (30 MBps) [2024-12-04T14:24:12.876Z] Copying: 939/1024 [MB] (28 MBps) [2024-12-04T14:24:13.822Z] Copying: 971/1024 [MB] (32 MBps) [2024-12-04T14:24:14.766Z] Copying: 998/1024 [MB] (26 MBps) [2024-12-04T14:24:15.336Z] Copying: 1015/1024 [MB] (16 MBps) [2024-12-04T14:24:15.933Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-04 14:24:15.753659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.754048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:14.468 [2024-12-04 14:24:15.754958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:14.468 [2024-12-04 14:24:15.755006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.755081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.468 [2024-12-04 14:24:15.759957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.760129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:14.468 [2024-12-04 14:24:15.760151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.807 ms 00:23:14.468 [2024-12-04 14:24:15.760160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.760445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.760457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:14.468 [2024-12-04 14:24:15.760468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:23:14.468 [2024-12-04 14:24:15.760476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.774253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.774305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:14.468 [2024-12-04 14:24:15.774317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.760 ms 00:23:14.468 [2024-12-04 14:24:15.774326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.780462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.780516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:14.468 [2024-12-04 14:24:15.780527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.085 ms 00:23:14.468 [2024-12-04 14:24:15.780535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.807398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.807577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:14.468 [2024-12-04 14:24:15.807598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.810 ms 00:23:14.468 [2024-12-04 14:24:15.807606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.824152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.824196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:14.468 [2024-12-04 14:24:15.824210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 
ms 00:23:14.468 [2024-12-04 14:24:15.824218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.468 [2024-12-04 14:24:15.834046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.468 [2024-12-04 14:24:15.834117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:14.468 [2024-12-04 14:24:15.834142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.770 ms 00:23:14.468 [2024-12-04 14:24:15.834151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.469 [2024-12-04 14:24:15.860504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.469 [2024-12-04 14:24:15.860553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:14.469 [2024-12-04 14:24:15.860566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:23:14.469 [2024-12-04 14:24:15.860574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.469 [2024-12-04 14:24:15.886383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.469 [2024-12-04 14:24:15.886430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:14.469 [2024-12-04 14:24:15.886443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:23:14.469 [2024-12-04 14:24:15.886463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.469 [2024-12-04 14:24:15.911570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.469 [2024-12-04 14:24:15.911617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:14.469 [2024-12-04 14:24:15.911629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.060 ms 00:23:14.469 [2024-12-04 14:24:15.911636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-04 14:24:15.936953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.731 [2024-12-04 14:24:15.937000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:14.731 [2024-12-04 14:24:15.937012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.218 ms 00:23:14.731 [2024-12-04 14:24:15.937019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.731 [2024-12-04 14:24:15.937063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:14.731 [2024-12-04 14:24:15.937079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:14.731 [2024-12-04 14:24:15.937109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:23:14.731 [2024-12-04 14:24:15.937117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937159] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 
14:24:15.937354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:23:14.731 [2024-12-04 14:24:15.937554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:14.731 [2024-12-04 14:24:15.937627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:14.732 [2024-12-04 14:24:15.937903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:14.732 [2024-12-04 14:24:15.937911] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f4bde51-aa2b-4599-b89d-de6a76aa5c08 00:23:14.732 [2024-12-04 14:24:15.937920] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:23:14.732 [2024-12-04 14:24:15.937934] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 171968 00:23:14.732 [2024-12-04 14:24:15.937941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 169984 00:23:14.732 [2024-12-04 14:24:15.937950] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0117 00:23:14.732 [2024-12-04 14:24:15.937958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:14.732 [2024-12-04 14:24:15.937967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:23:14.732 [2024-12-04 14:24:15.937975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:14.732 [2024-12-04 14:24:15.937981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:14.732 [2024-12-04 14:24:15.937994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:14.732 [2024-12-04 14:24:15.938002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.732 [2024-12-04 14:24:15.938010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:14.732 [2024-12-04 14:24:15.938019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:23:14.732 [2024-12-04 14:24:15.938027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.951721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.732 [2024-12-04 14:24:15.951891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:14.732 [2024-12-04 14:24:15.951910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.660 ms 00:23:14.732 [2024-12-04 14:24:15.951918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.952172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.732 [2024-12-04 14:24:15.952184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:14.732 [2024-12-04 14:24:15.952193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:23:14.732 [2024-12-04 14:24:15.952207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.991286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:15.991330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.732 [2024-12-04 14:24:15.991341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:15.991349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.991403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:15.991413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.732 [2024-12-04 14:24:15.991421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:15.991436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.991509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:15.991520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.732 [2024-12-04 14:24:15.991528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:15.991536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:15.991551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:15.991559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.732 [2024-12-04 14:24:15.991567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:15.991576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.072891] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.072943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.732 [2024-12-04 14:24:16.072954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.072963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.732 [2024-12-04 14:24:16.105340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.732 [2024-12-04 14:24:16.105439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.732 [2024-12-04 14:24:16.105506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.732 [2024-12-04 14:24:16.105638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:14.732 [2024-12-04 14:24:16.105696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.732 [2024-12-04 14:24:16.105767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.732 [2024-12-04 14:24:16.105824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:14.732 [2024-12-04 14:24:16.105835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.732 [2024-12-04 14:24:16.105843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:14.732 [2024-12-04 14:24:16.105852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:23:14.732 [2024-12-04 14:24:16.105985] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.320 ms, result 0
00:23:15.677
00:23:15.677
00:23:15.677 14:24:16 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:18.227 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:18.227 14:24:19 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:18.227 [2024-12-04 14:24:19.223058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:18.227 [2024-12-04 14:24:19.223179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77052 ]
00:23:18.227 [2024-12-04 14:24:19.367920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.227 [2024-12-04 14:24:19.543319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:18.488 [2024-12-04 14:24:19.793943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:18.488 [2024-12-04 14:24:19.794002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:18.488 [2024-12-04 14:24:19.944649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.488 [2024-12-04 14:24:19.944691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:18.488 [2024-12-04 14:24:19.944704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:18.488 [2024-12-04 14:24:19.944714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.488 [2024-12-04 14:24:19.944755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.488 [2024-12-04 14:24:19.944765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:18.488 [2024-12-04 14:24:19.944772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:23:18.488 [2024-12-04 14:24:19.944780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.488 [2024-12-04 14:24:19.944796] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:18.488 [2024-12-04 14:24:19.945501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:18.488 [2024-12-04 14:24:19.945519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.488 [2024-12-04 14:24:19.945526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:18.488 [2024-12-04 14:24:19.945535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms
00:23:18.488 [2024-12-04 14:24:19.945542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:18.488 [2024-12-04 14:24:19.946568] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:18.752 [2024-12-04 14:24:19.959727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:18.752 [2024-12-04 14:24:19.959856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:23:18.752 [2024-12-04
14:24:19.959874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.161 ms 00:23:18.752 [2024-12-04 14:24:19.959883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.959929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.959938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.752 [2024-12-04 14:24:19.959946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:18.752 [2024-12-04 14:24:19.959953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.964728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.964758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.752 [2024-12-04 14:24:19.964767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:23:18.752 [2024-12-04 14:24:19.964774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.964856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.964866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.752 [2024-12-04 14:24:19.964874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:18.752 [2024-12-04 14:24:19.964881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.964914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.964927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.752 [2024-12-04 14:24:19.964935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:18.752 [2024-12-04 14:24:19.964942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.964968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:18.752 [2024-12-04 14:24:19.968412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.968436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.752 [2024-12-04 14:24:19.968445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:23:18.752 [2024-12-04 14:24:19.968452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.968481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.968489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.752 [2024-12-04 14:24:19.968496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.752 [2024-12-04 14:24:19.968505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.968524] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.752 [2024-12-04 14:24:19.968541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:18.752 [2024-12-04 14:24:19.968572] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.752 [2024-12-04 14:24:19.968587] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:18.752 [2024-12-04 14:24:19.968658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:18.752 [2024-12-04 14:24:19.968671] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.752 [2024-12-04 14:24:19.968683] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:18.752 [2024-12-04 14:24:19.968692] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.752 [2024-12-04 14:24:19.968701] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.752 [2024-12-04 14:24:19.968709] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:18.752 [2024-12-04 14:24:19.968715] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.752 [2024-12-04 14:24:19.968722] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:18.752 [2024-12-04 14:24:19.968728] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:18.752 [2024-12-04 14:24:19.968736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.968743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.752 [2024-12-04 14:24:19.968751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:23:18.752 [2024-12-04 14:24:19.968758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.968818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.752 [2024-12-04 14:24:19.968825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.752 [2024-12-04 14:24:19.968832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:18.752 [2024-12-04 14:24:19.968839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.752 [2024-12-04 14:24:19.968916] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.752 [2024-12-04 14:24:19.968926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.752 [2024-12-04 14:24:19.968934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.752 [2024-12-04 14:24:19.968941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.752 [2024-12-04 14:24:19.968948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.752 [2024-12-04 14:24:19.968955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.752 [2024-12-04 14:24:19.968962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:18.752 [2024-12-04 14:24:19.968970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.752 [2024-12-04 14:24:19.968977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:18.752 [2024-12-04 14:24:19.968983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.753 [2024-12-04 14:24:19.968992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.753 [2024-12-04 14:24:19.968999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:23:18.753 [2024-12-04 14:24:19.969006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.753 [2024-12-04 14:24:19.969012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.753 [2024-12-04 14:24:19.969019] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:18.753 [2024-12-04 14:24:19.969025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.753 [2024-12-04 14:24:19.969043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:18.753 [2024-12-04 14:24:19.969049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:18.753 [2024-12-04 14:24:19.969062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:18.753 [2024-12-04 14:24:19.969069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.753 [2024-12-04 14:24:19.969082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.753 [2024-12-04 14:24:19.969120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969134] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.753 [2024-12-04 14:24:19.969140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969153] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.753 [2024-12-04 14:24:19.969160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.753 [2024-12-04 14:24:19.969180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.753 [2024-12-04 14:24:19.969193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.753 [2024-12-04 14:24:19.969199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:18.753 [2024-12-04 14:24:19.969206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.753 [2024-12-04 14:24:19.969212] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.753 [2024-12-04 14:24:19.969222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.753 [2024-12-04 14:24:19.969230] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969238] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.753 [2024-12-04 14:24:19.969246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.753 [2024-12-04 14:24:19.969253] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.753 [2024-12-04 14:24:19.969259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.753 [2024-12-04 14:24:19.969266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.753 [2024-12-04 14:24:19.969273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.753 [2024-12-04 14:24:19.969280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.753 [2024-12-04 14:24:19.969287] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.753 [2024-12-04 14:24:19.969296] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.753 [2024-12-04 14:24:19.969304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:18.753 [2024-12-04 14:24:19.969311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:18.753 [2024-12-04 14:24:19.969318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:18.753 [2024-12-04 14:24:19.969325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:18.753 [2024-12-04 14:24:19.969332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:18.753 [2024-12-04 14:24:19.969339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:18.753 [2024-12-04 14:24:19.969346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:18.753 [2024-12-04 14:24:19.969353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:18.753 [2024-12-04 14:24:19.969360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:18.753 [2024-12-04 14:24:19.969367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:18.753 [2024-12-04 14:24:19.969374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:18.753 [2024-12-04 14:24:19.969380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:18.753 [2024-12-04 14:24:19.969388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:18.753 [2024-12-04 14:24:19.969395] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.753 [2024-12-04 14:24:19.969403] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.753 [2024-12-04 14:24:19.969411] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.753 [2024-12-04 14:24:19.969417] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.753 [2024-12-04 14:24:19.969425] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.753 [2024-12-04 14:24:19.969433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.753 [2024-12-04 14:24:19.969440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.753 [2024-12-04 14:24:19.969446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.753 [2024-12-04 14:24:19.969454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:23:18.753 [2024-12-04 14:24:19.969462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.753 [2024-12-04 14:24:19.984147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.753 [2024-12-04 14:24:19.984174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.753 [2024-12-04 14:24:19.984184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.644 ms 00:23:18.753 [2024-12-04 14:24:19.984196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.753 [2024-12-04 14:24:19.984276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.753 [2024-12-04 14:24:19.984284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.753 [2024-12-04 14:24:19.984293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:18.754 [2024-12-04 14:24:19.984300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.025168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.025208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.754 [2024-12-04 14:24:20.025220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.828 ms 00:23:18.754 [2024-12-04 14:24:20.025228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.025264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.025273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.754 [2024-12-04 14:24:20.025281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:18.754 [2024-12-04 14:24:20.025289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.025632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.025646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.754 [2024-12-04 14:24:20.025655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:23:18.754 [2024-12-04 14:24:20.025666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 
14:24:20.025774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.025783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.754 [2024-12-04 14:24:20.025792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:18.754 [2024-12-04 14:24:20.025798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.039452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.039482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.754 [2024-12-04 14:24:20.039492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.635 ms 00:23:18.754 [2024-12-04 14:24:20.039499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.052304] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:18.754 [2024-12-04 14:24:20.052428] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.754 [2024-12-04 14:24:20.052443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.052450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.754 [2024-12-04 14:24:20.052459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.857 ms 00:23:18.754 [2024-12-04 14:24:20.052465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.076810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.076841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.754 [2024-12-04 14:24:20.076852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.314 ms 00:23:18.754 [2024-12-04 14:24:20.076860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.088675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.088703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.754 [2024-12-04 14:24:20.088712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.779 ms 00:23:18.754 [2024-12-04 14:24:20.088719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.100273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.100306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:18.754 [2024-12-04 14:24:20.100316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.523 ms 00:23:18.754 [2024-12-04 14:24:20.100322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.100667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.100678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:18.754 [2024-12-04 14:24:20.100686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:18.754 [2024-12-04 14:24:20.100693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.158120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.158157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:18.754 [2024-12-04 14:24:20.158180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.411 ms 00:23:18.754 [2024-12-04 14:24:20.158188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.168850] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:18.754 [2024-12-04 14:24:20.170996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.171023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:18.754 [2024-12-04 14:24:20.171034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:23:18.754 [2024-12-04 14:24:20.171047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.171119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.171131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:18.754 [2024-12-04 14:24:20.171141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.754 [2024-12-04 14:24:20.171149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.171689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.171705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:18.754 [2024-12-04 14:24:20.171714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:23:18.754 [2024-12-04 14:24:20.171722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.172904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.172932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:18.754 [2024-12-04 14:24:20.172941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:23:18.754 [2024-12-04 14:24:20.172948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.172974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.172981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:18.754 [2024-12-04 14:24:20.172993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.754 [2024-12-04 14:24:20.173000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.173031] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:18.754 [2024-12-04 14:24:20.173040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.173050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:18.754 [2024-12-04 14:24:20.173057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.754 [2024-12-04 14:24:20.173064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.196582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.196613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
dirty state 00:23:18.754 [2024-12-04 14:24:20.196624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.502 ms 00:23:18.754 [2024-12-04 14:24:20.196631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.196698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.754 [2024-12-04 14:24:20.196707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:18.754 [2024-12-04 14:24:20.196715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:18.754 [2024-12-04 14:24:20.196723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.754 [2024-12-04 14:24:20.197587] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 252.535 ms, result 0 00:23:20.145  [2024-12-04T14:24:22.593Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-04T14:24:23.553Z] Copying: 24/1024 [MB] (10 MBps) [2024-12-04T14:24:24.497Z] Copying: 42/1024 [MB] (18 MBps) [2024-12-04T14:24:25.442Z] Copying: 54/1024 [MB] (11 MBps) [2024-12-04T14:24:26.383Z] Copying: 66/1024 [MB] (12 MBps) [2024-12-04T14:24:27.772Z] Copying: 89/1024 [MB] (22 MBps) [2024-12-04T14:24:28.718Z] Copying: 105/1024 [MB] (16 MBps) [2024-12-04T14:24:29.663Z] Copying: 127/1024 [MB] (22 MBps) [2024-12-04T14:24:30.608Z] Copying: 149/1024 [MB] (21 MBps) [2024-12-04T14:24:31.556Z] Copying: 163/1024 [MB] (14 MBps) [2024-12-04T14:24:32.503Z] Copying: 181/1024 [MB] (18 MBps) [2024-12-04T14:24:33.447Z] Copying: 195/1024 [MB] (13 MBps) [2024-12-04T14:24:34.391Z] Copying: 215/1024 [MB] (20 MBps) [2024-12-04T14:24:35.776Z] Copying: 234/1024 [MB] (18 MBps) [2024-12-04T14:24:36.721Z] Copying: 251/1024 [MB] (17 MBps) [2024-12-04T14:24:37.667Z] Copying: 267/1024 [MB] (16 MBps) [2024-12-04T14:24:38.624Z] Copying: 284/1024 [MB] (16 MBps) [2024-12-04T14:24:39.587Z] Copying: 298/1024 [MB] (14 MBps) [2024-12-04T14:24:40.532Z] Copying: 320/1024 [MB] (21 MBps) [2024-12-04T14:24:41.475Z] Copying: 331/1024 [MB] (10 MBps) [2024-12-04T14:24:42.418Z] Copying: 342/1024 [MB] (10 MBps) [2024-12-04T14:24:43.805Z] Copying: 352/1024 [MB] (10 MBps) [2024-12-04T14:24:44.377Z] Copying: 362/1024 [MB] (10 MBps) [2024-12-04T14:24:45.759Z] Copying: 372/1024 [MB] (10 MBps) [2024-12-04T14:24:46.698Z] Copying: 383/1024 [MB] (10 MBps) [2024-12-04T14:24:47.642Z] Copying: 393/1024 [MB] (10 MBps) [2024-12-04T14:24:48.587Z] Copying: 404/1024 [MB] (10 MBps) [2024-12-04T14:24:49.535Z] Copying: 424/1024 [MB] (19 MBps) [2024-12-04T14:24:50.482Z] Copying: 438/1024 [MB] (14 MBps) [2024-12-04T14:24:51.427Z] Copying: 457/1024 [MB] (19 MBps) [2024-12-04T14:24:52.381Z] Copying: 471/1024 [MB] (14 MBps) [2024-12-04T14:24:53.830Z] Copying: 482/1024 [MB] (10 MBps) [2024-12-04T14:24:54.405Z] Copying: 493/1024 [MB] (10 MBps) [2024-12-04T14:24:55.793Z] Copying: 503/1024 [MB] (10 MBps) [2024-12-04T14:24:56.739Z] Copying: 514/1024 [MB] (10 MBps) [2024-12-04T14:24:57.685Z] Copying: 524/1024 [MB] (10 MBps) [2024-12-04T14:24:58.631Z] Copying: 534/1024 [MB] (10 MBps) [2024-12-04T14:24:59.577Z] Copying: 545/1024 [MB] (11 MBps) [2024-12-04T14:25:00.523Z] Copying: 557/1024 [MB] (11 MBps) [2024-12-04T14:25:01.467Z] Copying: 568/1024 [MB] (10 MBps) [2024-12-04T14:25:02.412Z] Copying: 578/1024 [MB] (10 MBps) [2024-12-04T14:25:03.803Z] Copying: 588/1024 [MB] (10 MBps) [2024-12-04T14:25:04.374Z] Copying: 599/1024 [MB] (10 MBps) [2024-12-04T14:25:05.761Z] Copying: 620/1024 [MB] (20 MBps) [2024-12-04T14:25:06.705Z] Copying: 
644/1024 [MB] (24 MBps) [2024-12-04T14:25:07.647Z] Copying: 665/1024 [MB] (20 MBps) [2024-12-04T14:25:08.638Z] Copying: 678/1024 [MB] (12 MBps) [2024-12-04T14:25:09.608Z] Copying: 698/1024 [MB] (20 MBps) [2024-12-04T14:25:10.554Z] Copying: 714/1024 [MB] (16 MBps) [2024-12-04T14:25:11.500Z] Copying: 726/1024 [MB] (11 MBps) [2024-12-04T14:25:12.444Z] Copying: 738/1024 [MB] (12 MBps) [2024-12-04T14:25:13.390Z] Copying: 751/1024 [MB] (12 MBps) [2024-12-04T14:25:14.780Z] Copying: 765/1024 [MB] (13 MBps) [2024-12-04T14:25:15.720Z] Copying: 780/1024 [MB] (15 MBps) [2024-12-04T14:25:16.662Z] Copying: 791/1024 [MB] (11 MBps) [2024-12-04T14:25:17.607Z] Copying: 803/1024 [MB] (12 MBps) [2024-12-04T14:25:18.582Z] Copying: 819/1024 [MB] (15 MBps) [2024-12-04T14:25:19.523Z] Copying: 832/1024 [MB] (13 MBps) [2024-12-04T14:25:20.465Z] Copying: 854/1024 [MB] (21 MBps) [2024-12-04T14:25:21.407Z] Copying: 866/1024 [MB] (12 MBps) [2024-12-04T14:25:22.790Z] Copying: 887/1024 [MB] (21 MBps) [2024-12-04T14:25:23.735Z] Copying: 908/1024 [MB] (20 MBps) [2024-12-04T14:25:24.678Z] Copying: 925/1024 [MB] (16 MBps) [2024-12-04T14:25:25.630Z] Copying: 942/1024 [MB] (17 MBps) [2024-12-04T14:25:26.572Z] Copying: 958/1024 [MB] (16 MBps) [2024-12-04T14:25:27.517Z] Copying: 975/1024 [MB] (16 MBps) [2024-12-04T14:25:28.457Z] Copying: 993/1024 [MB] (18 MBps) [2024-12-04T14:25:29.399Z] Copying: 1005/1024 [MB] (11 MBps) [2024-12-04T14:25:30.343Z] Copying: 1016/1024 [MB] (11 MBps) [2024-12-04T14:25:30.343Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-04 14:25:30.071533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.071637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:28.878 [2024-12-04 14:25:30.071663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:28.878 [2024-12-04 14:25:30.071678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.071720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:28.878 [2024-12-04 14:25:30.076200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.076278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:28.878 [2024-12-04 14:25:30.076296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.451 ms 00:24:28.878 [2024-12-04 14:25:30.076309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.076704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.076731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:28.878 [2024-12-04 14:25:30.076750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:24:28.878 [2024-12-04 14:25:30.076762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.083529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.083600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:28.878 [2024-12-04 14:25:30.083632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:24:28.878 [2024-12-04 14:25:30.083646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.092730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.092796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:28.878 [2024-12-04 14:25:30.092812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.039 ms 00:24:28.878 [2024-12-04 14:25:30.092826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.120681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.120734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:28.878 [2024-12-04 14:25:30.120748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.728 ms 00:24:28.878 [2024-12-04 14:25:30.120757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.137573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.137623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:28.878 [2024-12-04 14:25:30.137636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.764 ms 00:24:28.878 [2024-12-04 14:25:30.137652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.146411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.146592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:28.878 [2024-12-04 14:25:30.146615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.704 ms 00:24:28.878 [2024-12-04 14:25:30.146623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.173983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.174183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:28.878 [2024-12-04 14:25:30.174207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.337 ms 00:24:28.878 [2024-12-04 14:25:30.174215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.199808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.199854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:28.878 [2024-12-04 14:25:30.199879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.552 ms 00:24:28.878 [2024-12-04 14:25:30.199887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.225608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.225788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.878 [2024-12-04 14:25:30.225810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 00:24:28.878 [2024-12-04 14:25:30.225818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 [2024-12-04 14:25:30.251012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.878 [2024-12-04 14:25:30.251058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.878 [2024-12-04 14:25:30.251070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.010 ms 00:24:28.878 [2024-12-04 14:25:30.251078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.878 
[2024-12-04 14:25:30.251138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.878 [2024-12-04 14:25:30.251162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:28.878 [2024-12-04 14:25:30.251173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open 00:24:28.878 [2024-12-04 14:25:30.251182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.878 [2024-12-04 14:25:30.251284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 
[2024-12-04 14:25:30.251351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:24:28.879 [2024-12-04 14:25:30.251563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.879 [2024-12-04 14:25:30.251828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.880 [2024-12-04 14:25:30.251959] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.880 [2024-12-04 14:25:30.251967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f4bde51-aa2b-4599-b89d-de6a76aa5c08 00:24:28.880 [2024-12-04 14:25:30.251976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:24:28.880 [2024-12-04 14:25:30.251983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:28.880 [2024-12-04 14:25:30.251990] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:28.880 [2024-12-04 14:25:30.252000] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:28.880 [2024-12-04 14:25:30.252008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.880 [2024-12-04 14:25:30.252016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.880 [2024-12-04 14:25:30.252025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.880 [2024-12-04 14:25:30.252040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.880 [2024-12-04 14:25:30.252047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.880 [2024-12-04 14:25:30.252055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.880 [2024-12-04 14:25:30.252062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.880 [2024-12-04 14:25:30.252075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:24:28.880 [2024-12-04 14:25:30.252084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.265819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.880 [2024-12-04 14:25:30.265858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.880 [2024-12-04 14:25:30.265870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.687 ms 00:24:28.880 [2024-12-04 14:25:30.265878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.266133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.880 [2024-12-04 14:25:30.266144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.880 [2024-12-04 14:25:30.266153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:28.880 [2024-12-04 14:25:30.266161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.305284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.880 [2024-12-04 14:25:30.305325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.880 [2024-12-04 14:25:30.305336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.880 [2024-12-04 14:25:30.305346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.305408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.880 [2024-12-04 14:25:30.305417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.880 [2024-12-04 14:25:30.305425] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.880 [2024-12-04 14:25:30.305434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.305511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.880 [2024-12-04 14:25:30.305523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.880 [2024-12-04 14:25:30.305532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.880 [2024-12-04 14:25:30.305541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.880 [2024-12-04 14:25:30.305557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.880 [2024-12-04 14:25:30.305570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.880 [2024-12-04 14:25:30.305578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.880 [2024-12-04 14:25:30.305586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.159 [2024-12-04 14:25:30.391599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.159 [2024-12-04 14:25:30.391657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.159 [2024-12-04 14:25:30.391671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.159 [2024-12-04 14:25:30.391681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.160 [2024-12-04 14:25:30.423545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.423554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.160 [2024-12-04 14:25:30.423648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.423656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.160 [2024-12-04 14:25:30.423724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.423733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.160 [2024-12-04 14:25:30.423862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.423871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:24:29.160 [2024-12-04 14:25:30.423919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.423931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.423972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.423981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.160 [2024-12-04 14:25:30.423991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.424000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.424047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.160 [2024-12-04 14:25:30.424058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.160 [2024-12-04 14:25:30.424069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.160 [2024-12-04 14:25:30.424078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.160 [2024-12-04 14:25:30.424245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.695 ms, result 0 00:24:30.104 00:24:30.104 00:24:30.104 14:25:31 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:32.651 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:32.651 Process with pid 75408 is not found 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75408 00:24:32.651 14:25:33 -- common/autotest_common.sh@936 -- # '[' -z 75408 ']' 00:24:32.651 14:25:33 -- common/autotest_common.sh@940 -- # kill -0 75408 00:24:32.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75408) - No such process 00:24:32.651 14:25:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75408 is not found' 00:24:32.651 14:25:33 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:32.651 Remove shared memory files 00:24:32.651 14:25:34 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:32.651 14:25:34 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:32.651 14:25:34 -- ftl/common.sh@205 -- # rm -f rm -f 00:24:32.651 14:25:34 -- ftl/common.sh@206 -- # rm -f rm -f 00:24:32.651 14:25:34 -- ftl/common.sh@207 -- # rm -f rm -f 00:24:32.651 14:25:34 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:32.651 14:25:34 -- ftl/common.sh@209 -- # rm -f rm -f 00:24:32.912 ************************************ 00:24:32.912 END TEST ftl_dirty_shutdown 00:24:32.912 ************************************ 00:24:32.913 00:24:32.913 real 3m46.567s 00:24:32.913 user 4m1.288s 00:24:32.913 sys 0m22.979s 
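The two md5sum checks above ('testfile: OK' and 'testfile2: OK') are the point of the ftl_dirty_shutdown test: data written through the FTL bdev before the simulated dirty shutdown has to survive recovery (note the 'Restore NV cache metadata', 'Restore P2L checkpoints' and 'Restore L2P' steps in the second startup trace), after which each region is read back with spdk_dd and compared against checksums recorded up front. A minimal sketch of that read-back-and-verify step, reusing the exact spdk_dd flags visible in the trace; this illustrates the pattern and is not the actual test/ftl/dirty_shutdown.sh logic, and the repo path is simply the one this CI job assumes.

    #!/usr/bin/env bash
    # Sketch: read one region back out of the FTL bdev and compare it against
    # the checksum captured before the dirty shutdown (paths as in this job).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"   # bdev config saved by the test
    md5sum -c "$SPDK/test/ftl/testfile2.md5"      # prints '.../testfile2: OK'

A mismatch here makes md5sum exit nonzero, failing the test before the cleanup and 'END TEST' summary above would ever print.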
00:24:32.913 14:25:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:32.913 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 14:25:34 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:24:32.913 14:25:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:32.913 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:32.913 ************************************ 00:24:32.913 START TEST ftl_upgrade_shutdown 00:24:32.913 ************************************ 00:24:32.913 14:25:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:24:32.913 * Looking for test storage... 00:24:32.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.913 14:25:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:32.913 14:25:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:32.913 14:25:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:32.913 14:25:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:32.913 14:25:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:32.913 14:25:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:32.913 14:25:34 -- scripts/common.sh@335 -- # IFS=.-: 00:24:32.913 14:25:34 -- scripts/common.sh@335 -- # read -ra ver1 00:24:32.913 14:25:34 -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.913 14:25:34 -- scripts/common.sh@336 -- # read -ra ver2 00:24:32.913 14:25:34 -- scripts/common.sh@337 -- # local 'op=<' 00:24:32.913 14:25:34 -- scripts/common.sh@339 -- # ver1_l=2 00:24:32.913 14:25:34 -- scripts/common.sh@340 -- # ver2_l=1 00:24:32.913 14:25:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:32.913 14:25:34 -- scripts/common.sh@343 -- # case "$op" in 00:24:32.913 14:25:34 -- scripts/common.sh@344 -- # : 1 00:24:32.913 14:25:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:32.913 14:25:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.913 14:25:34 -- scripts/common.sh@364 -- # decimal 1 00:24:32.913 14:25:34 -- scripts/common.sh@352 -- # local d=1 00:24:32.913 14:25:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.913 14:25:34 -- scripts/common.sh@354 -- # echo 1 00:24:32.913 14:25:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:32.913 14:25:34 -- scripts/common.sh@365 -- # decimal 2 00:24:32.913 14:25:34 -- scripts/common.sh@352 -- # local d=2 00:24:32.913 14:25:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.913 14:25:34 -- scripts/common.sh@354 -- # echo 2 00:24:32.913 14:25:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:32.913 14:25:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:32.913 14:25:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:32.913 14:25:34 -- scripts/common.sh@367 -- # return 0 00:24:32.913 14:25:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:32.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.913 --rc genhtml_branch_coverage=1 00:24:32.913 --rc genhtml_function_coverage=1 00:24:32.913 --rc genhtml_legend=1 00:24:32.913 --rc geninfo_all_blocks=1 00:24:32.913 --rc geninfo_unexecuted_blocks=1 00:24:32.913 00:24:32.913 ' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:32.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.913 --rc genhtml_branch_coverage=1 00:24:32.913 --rc genhtml_function_coverage=1 00:24:32.913 --rc genhtml_legend=1 00:24:32.913 --rc geninfo_all_blocks=1 00:24:32.913 --rc geninfo_unexecuted_blocks=1 00:24:32.913 00:24:32.913 ' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:32.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.913 --rc genhtml_branch_coverage=1 00:24:32.913 --rc genhtml_function_coverage=1 00:24:32.913 --rc genhtml_legend=1 00:24:32.913 --rc geninfo_all_blocks=1 00:24:32.913 --rc geninfo_unexecuted_blocks=1 00:24:32.913 00:24:32.913 ' 00:24:32.913 14:25:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:32.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.913 --rc genhtml_branch_coverage=1 00:24:32.913 --rc genhtml_function_coverage=1 00:24:32.913 --rc genhtml_legend=1 00:24:32.913 --rc geninfo_all_blocks=1 00:24:32.913 --rc geninfo_unexecuted_blocks=1 00:24:32.913 00:24:32.913 ' 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:32.913 14:25:34 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:32.913 14:25:34 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.913 14:25:34 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.913 14:25:34 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
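The cmp_versions trace above is deciding whether the installed lcov predates 2.x so the right coverage flags are exported. A distilled, runnable sketch of the comparison it performs, reconstructed from the xtrace rather than copied from scripts/common.sh: split both versions on '.', '-', ':' and compare field by field, treating missing fields as zero.

    lt() {   # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1   # equal versions are not 'less than'
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the exact call traced above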
00:24:32.913 14:25:34 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:32.913 14:25:34 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.913 14:25:34 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:32.913 14:25:34 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:32.913 14:25:34 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.913 14:25:34 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.913 14:25:34 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:32.913 14:25:34 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:32.913 14:25:34 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.913 14:25:34 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.913 14:25:34 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:32.913 14:25:34 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:32.913 14:25:34 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.913 14:25:34 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.913 14:25:34 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:32.913 14:25:34 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:32.913 14:25:34 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.913 14:25:34 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.913 14:25:34 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.913 14:25:34 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.913 14:25:34 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:32.913 14:25:34 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:32.913 14:25:34 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.913 14:25:34 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:32.913 14:25:34 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:32.914 14:25:34 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:32.914 14:25:34 -- ftl/common.sh@81 -- # local base_bdev= 00:24:32.914 14:25:34 -- ftl/common.sh@82 -- # local cache_bdev= 00:24:32.914 14:25:34 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:32.914 14:25:34 -- ftl/common.sh@89 -- # spdk_tgt_pid=77890 00:24:32.914 14:25:34 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:32.914 14:25:34 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:32.914 14:25:34 -- ftl/common.sh@91 -- # waitforlisten 77890 00:24:32.914 14:25:34 -- common/autotest_common.sh@829 -- # '[' -z 77890 ']' 00:24:32.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.914 14:25:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.914 14:25:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.914 14:25:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.914 14:25:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.914 14:25:34 -- common/autotest_common.sh@10 -- # set +x 00:24:33.175 [2024-12-04 14:25:34.450033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:33.175 [2024-12-04 14:25:34.450457] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77890 ] 00:24:33.175 [2024-12-04 14:25:34.603501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.437 [2024-12-04 14:25:34.831218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:33.437 [2024-12-04 14:25:34.831602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.827 14:25:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.827 14:25:35 -- common/autotest_common.sh@862 -- # return 0 00:24:34.827 14:25:35 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:34.827 14:25:35 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:34.827 14:25:35 -- ftl/common.sh@99 -- # local params 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:34.827 14:25:35 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:34.827 14:25:35 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:34.827 14:25:35 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:24:34.827 14:25:35 -- ftl/common.sh@54 -- # local name=base 00:24:34.827 14:25:35 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:24:34.827 14:25:35 -- ftl/common.sh@56 -- # local size=20480 00:24:34.827 14:25:35 -- ftl/common.sh@59 -- # local base_bdev 00:24:34.827 14:25:35 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:24:34.827 14:25:36 -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:34.827 14:25:36 -- ftl/common.sh@62 -- # local base_size 00:24:34.827 14:25:36 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:34.827 14:25:36 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:24:34.827 14:25:36 -- common/autotest_common.sh@1368 -- # local bdev_info 00:24:34.827 14:25:36 -- common/autotest_common.sh@1369 -- # local bs 00:24:34.827 14:25:36 -- common/autotest_common.sh@1370 -- # local nb 00:24:34.827 14:25:36 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:35.090 14:25:36 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:24:35.090 { 00:24:35.090 "name": "basen1", 00:24:35.090 "aliases": [ 00:24:35.090 "9e954ff2-14aa-4e00-849d-941fb5913799" 00:24:35.090 ], 00:24:35.090 "product_name": "NVMe disk", 00:24:35.090 "block_size": 4096, 00:24:35.090 "num_blocks": 1310720, 00:24:35.090 "uuid": "9e954ff2-14aa-4e00-849d-941fb5913799", 00:24:35.090 "assigned_rate_limits": { 00:24:35.090 "rw_ios_per_sec": 0, 00:24:35.090 "rw_mbytes_per_sec": 0, 00:24:35.090 "r_mbytes_per_sec": 0, 00:24:35.090 "w_mbytes_per_sec": 0 00:24:35.090 }, 00:24:35.090 "claimed": true, 00:24:35.090 "claim_type": "read_many_write_one", 00:24:35.090 "zoned": false, 00:24:35.090 "supported_io_types": { 00:24:35.090 "read": true, 00:24:35.090 "write": true, 00:24:35.090 "unmap": true, 00:24:35.090 "write_zeroes": true, 00:24:35.090 "flush": true, 00:24:35.090 "reset": true, 00:24:35.090 "compare": true, 00:24:35.090 "compare_and_write": false, 00:24:35.090 "abort": true, 00:24:35.090 "nvme_admin": true, 00:24:35.090 "nvme_io": true 00:24:35.090 }, 00:24:35.090 "driver_specific": { 00:24:35.090 "nvme": [ 00:24:35.090 { 00:24:35.090 "pci_address": "0000:00:07.0", 00:24:35.090 "trid": { 00:24:35.090 "trtype": "PCIe", 00:24:35.090 "traddr": "0000:00:07.0" 00:24:35.090 }, 00:24:35.090 "ctrlr_data": { 00:24:35.090 "cntlid": 0, 00:24:35.090 "vendor_id": "0x1b36", 00:24:35.090 "model_number": "QEMU NVMe Ctrl", 00:24:35.090 "serial_number": "12341", 00:24:35.090 "firmware_revision": "8.0.0", 00:24:35.090 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:35.090 "oacs": { 00:24:35.090 "security": 0, 00:24:35.090 "format": 1, 00:24:35.090 "firmware": 0, 00:24:35.090 "ns_manage": 1 00:24:35.090 }, 00:24:35.090 "multi_ctrlr": false, 00:24:35.090 "ana_reporting": false 00:24:35.090 }, 00:24:35.090 "vs": { 00:24:35.090 "nvme_version": "1.4" 00:24:35.090 }, 00:24:35.090 "ns_data": { 00:24:35.090 "id": 1, 00:24:35.090 "can_share": false 00:24:35.090 } 00:24:35.090 } 00:24:35.090 ], 00:24:35.090 "mp_policy": "active_passive" 00:24:35.090 } 00:24:35.090 } 00:24:35.090 ]' 00:24:35.090 14:25:36 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:24:35.090 14:25:36 -- common/autotest_common.sh@1372 -- # bs=4096 00:24:35.090 14:25:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:24:35.090 14:25:36 -- common/autotest_common.sh@1373 -- # nb=1310720 00:24:35.090 14:25:36 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:24:35.090 14:25:36 -- common/autotest_common.sh@1377 -- # echo 5120 00:24:35.090 14:25:36 -- ftl/common.sh@63 -- # base_size=5120 00:24:35.090 14:25:36 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:35.090 14:25:36 -- ftl/common.sh@67 -- # clear_lvols 00:24:35.090 14:25:36 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:35.090 14:25:36 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:24:35.352 14:25:36 -- ftl/common.sh@28 -- # stores=89f14f96-693f-4fdb-89b2-110489ae53a3 00:24:35.352 14:25:36 -- ftl/common.sh@29 -- # for lvs in $stores 00:24:35.352 14:25:36 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89f14f96-693f-4fdb-89b2-110489ae53a3 00:24:35.614 14:25:36 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:35.875 14:25:37 -- ftl/common.sh@68 -- # lvs=4c002460-e29a-42dc-a274-04d03b2a4a37 00:24:35.875 14:25:37 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4c002460-e29a-42dc-a274-04d03b2a4a37 00:24:36.136 14:25:37 -- ftl/common.sh@107 -- # base_bdev=b4fa4102-5ceb-4fe3-97f6-e4249cff6710 00:24:36.136 14:25:37 -- ftl/common.sh@108 -- # [[ -z b4fa4102-5ceb-4fe3-97f6-e4249cff6710 ]] 00:24:36.136 14:25:37 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 b4fa4102-5ceb-4fe3-97f6-e4249cff6710 5120 00:24:36.136 14:25:37 -- ftl/common.sh@35 -- # local name=cache 00:24:36.136 14:25:37 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:24:36.136 14:25:37 -- ftl/common.sh@37 -- # local base_bdev=b4fa4102-5ceb-4fe3-97f6-e4249cff6710 00:24:36.136 14:25:37 -- ftl/common.sh@38 -- # local cache_size=5120 00:24:36.136 14:25:37 -- ftl/common.sh@41 -- # get_bdev_size b4fa4102-5ceb-4fe3-97f6-e4249cff6710 00:24:36.136 14:25:37 -- common/autotest_common.sh@1367 -- # local bdev_name=b4fa4102-5ceb-4fe3-97f6-e4249cff6710 00:24:36.136 14:25:37 -- common/autotest_common.sh@1368 -- # local bdev_info 00:24:36.136 14:25:37 -- common/autotest_common.sh@1369 -- # local bs 00:24:36.136 14:25:37 -- common/autotest_common.sh@1370 -- # local nb 00:24:36.136 14:25:37 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b4fa4102-5ceb-4fe3-97f6-e4249cff6710 00:24:36.136 14:25:37 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:24:36.136 { 00:24:36.136 "name": "b4fa4102-5ceb-4fe3-97f6-e4249cff6710", 00:24:36.136 "aliases": [ 00:24:36.136 "lvs/basen1p0" 00:24:36.136 ], 00:24:36.136 "product_name": "Logical Volume", 00:24:36.136 "block_size": 4096, 00:24:36.136 "num_blocks": 5242880, 00:24:36.136 "uuid": "b4fa4102-5ceb-4fe3-97f6-e4249cff6710", 00:24:36.136 "assigned_rate_limits": { 00:24:36.136 "rw_ios_per_sec": 0, 00:24:36.136 "rw_mbytes_per_sec": 0, 00:24:36.136 "r_mbytes_per_sec": 0, 00:24:36.136 "w_mbytes_per_sec": 0 00:24:36.136 }, 00:24:36.137 "claimed": false, 00:24:36.137 "zoned": false, 00:24:36.137 "supported_io_types": { 00:24:36.137 "read": true, 00:24:36.137 "write": true, 00:24:36.137 "unmap": true, 00:24:36.137 "write_zeroes": true, 00:24:36.137 "flush": false, 00:24:36.137 "reset": true, 00:24:36.137 "compare": false, 00:24:36.137 "compare_and_write": false, 00:24:36.137 "abort": false, 00:24:36.137 "nvme_admin": false, 00:24:36.137 "nvme_io": false 00:24:36.137 }, 00:24:36.137 "driver_specific": { 00:24:36.137 "lvol": { 00:24:36.137 "lvol_store_uuid": "4c002460-e29a-42dc-a274-04d03b2a4a37", 00:24:36.137 "base_bdev": "basen1", 00:24:36.137 "thin_provision": true, 00:24:36.137 "snapshot": false, 00:24:36.137 "clone": false, 00:24:36.137 "esnap_clone": false 00:24:36.137 } 00:24:36.137 } 00:24:36.137 } 00:24:36.137 ]' 00:24:36.137 14:25:37 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:24:36.137 14:25:37 -- common/autotest_common.sh@1372 -- # bs=4096 00:24:36.398 14:25:37 -- common/autotest_common.sh@1373 -- # jq '.[] 
.num_blocks' 00:24:36.398 14:25:37 -- common/autotest_common.sh@1373 -- # nb=5242880 00:24:36.398 14:25:37 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:24:36.398 14:25:37 -- common/autotest_common.sh@1377 -- # echo 20480 00:24:36.398 14:25:37 -- ftl/common.sh@41 -- # local base_size=1024 00:24:36.398 14:25:37 -- ftl/common.sh@44 -- # local nvc_bdev 00:24:36.398 14:25:37 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:24:36.665 14:25:37 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:36.665 14:25:37 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:36.665 14:25:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:36.665 14:25:38 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:36.665 14:25:38 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:36.665 14:25:38 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b4fa4102-5ceb-4fe3-97f6-e4249cff6710 -c cachen1p0 --l2p_dram_limit 2 00:24:36.927 [2024-12-04 14:25:38.232407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.232447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:36.927 [2024-12-04 14:25:38.232459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:36.927 [2024-12-04 14:25:38.232467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.232509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.232516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:36.927 [2024-12-04 14:25:38.232524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:24:36.927 [2024-12-04 14:25:38.232530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.232546] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:36.927 [2024-12-04 14:25:38.233256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:36.927 [2024-12-04 14:25:38.233300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.233316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:36.927 [2024-12-04 14:25:38.233335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:24:36.927 [2024-12-04 14:25:38.233350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.233455] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9ce649b4-bcad-4cec-9c2b-adcbb00d6b24 00:24:36.927 [2024-12-04 14:25:38.234411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.234511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:36.927 [2024-12-04 14:25:38.234560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:24:36.927 [2024-12-04 14:25:38.234579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.239187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.239285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:24:36.927 [2024-12-04 14:25:38.239332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.563 ms 00:24:36.927 [2024-12-04 14:25:38.239351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.239390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.239462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:36.927 [2024-12-04 14:25:38.239513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:36.927 [2024-12-04 14:25:38.239531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.239575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.239598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:36.927 [2024-12-04 14:25:38.239612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:36.927 [2024-12-04 14:25:38.239628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.239699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:36.927 [2024-12-04 14:25:38.242661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.242753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:36.927 [2024-12-04 14:25:38.242832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.963 ms 00:24:36.927 [2024-12-04 14:25:38.242841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.242865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.242872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:36.927 [2024-12-04 14:25:38.242880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:36.927 [2024-12-04 14:25:38.242885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.927 [2024-12-04 14:25:38.242904] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:36.927 [2024-12-04 14:25:38.242993] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:24:36.927 [2024-12-04 14:25:38.243006] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:36.927 [2024-12-04 14:25:38.243014] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:24:36.927 [2024-12-04 14:25:38.243024] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:36.927 [2024-12-04 14:25:38.243030] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:36.927 [2024-12-04 14:25:38.243039] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:36.927 [2024-12-04 14:25:38.243045] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:36.927 [2024-12-04 14:25:38.243052] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:24:36.927 [2024-12-04 14:25:38.243058] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:24:36.927 [2024-12-04 
14:25:38.243065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.927 [2024-12-04 14:25:38.243076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:36.927 [2024-12-04 14:25:38.243084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:24:36.928 [2024-12-04 14:25:38.243106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.243155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.243161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:36.928 [2024-12-04 14:25:38.243168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:24:36.928 [2024-12-04 14:25:38.243175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.243233] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:36.928 [2024-12-04 14:25:38.243240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:36.928 [2024-12-04 14:25:38.243247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:36.928 [2024-12-04 14:25:38.243265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:36.928 [2024-12-04 14:25:38.243277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:36.928 [2024-12-04 14:25:38.243284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:36.928 [2024-12-04 14:25:38.243288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:36.928 [2024-12-04 14:25:38.243300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:36.928 [2024-12-04 14:25:38.243307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:36.928 [2024-12-04 14:25:38.243319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:36.928 [2024-12-04 14:25:38.243339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:24:36.928 [2024-12-04 14:25:38.243345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:24:36.928 [2024-12-04 14:25:38.243356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:24:36.928 [2024-12-04 14:25:38.243362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:36.928 [2024-12-04 14:25:38.243373] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:36.928 [2024-12-04 
14:25:38.243379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:36.928 [2024-12-04 14:25:38.243390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:24:36.928 [2024-12-04 14:25:38.243395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:36.928 [2024-12-04 14:25:38.243406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:36.928 [2024-12-04 14:25:38.243412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:36.928 [2024-12-04 14:25:38.243424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:24:36.928 [2024-12-04 14:25:38.243429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:36.928 [2024-12-04 14:25:38.243440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:36.928 [2024-12-04 14:25:38.243445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:36.928 [2024-12-04 14:25:38.243457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243467] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:36.928 [2024-12-04 14:25:38.243473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:36.928 [2024-12-04 14:25:38.243480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:36.928 [2024-12-04 14:25:38.243493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:36.928 [2024-12-04 14:25:38.243498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:36.928 [2024-12-04 14:25:38.243504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:36.928 [2024-12-04 14:25:38.243511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:36.928 [2024-12-04 14:25:38.243518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:36.928 [2024-12-04 14:25:38.243523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:36.928 [2024-12-04 14:25:38.243530] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:36.928 [2024-12-04 14:25:38.243538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:36.928 [2024-12-04 14:25:38.243551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:24:36.928 [2024-12-04 
14:25:38.243558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243564] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:24:36.928 [2024-12-04 14:25:38.243570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:24:36.928 [2024-12-04 14:25:38.243575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:24:36.928 [2024-12-04 14:25:38.243582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:24:36.928 [2024-12-04 14:25:38.243587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:24:36.928 [2024-12-04 14:25:38.243621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:24:36.928 [2024-12-04 14:25:38.243626] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:36.928 [2024-12-04 14:25:38.243633] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243639] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.928 [2024-12-04 14:25:38.243650] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:36.928 [2024-12-04 14:25:38.243656] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:36.928 [2024-12-04 14:25:38.243662] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:36.928 [2024-12-04 14:25:38.243668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.243674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:36.928 [2024-12-04 14:25:38.243680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:24:36.928 [2024-12-04 14:25:38.243687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.255485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.255580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:24:36.928 [2024-12-04 14:25:38.255623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.758 ms 00:24:36.928 [2024-12-04 14:25:38.255643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.255681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.255732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:36.928 [2024-12-04 14:25:38.255752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:36.928 [2024-12-04 14:25:38.255768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.279605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.279698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:36.928 [2024-12-04 14:25:38.279739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.778 ms 00:24:36.928 [2024-12-04 14:25:38.279759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.279792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.279809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:36.928 [2024-12-04 14:25:38.279824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:36.928 [2024-12-04 14:25:38.279840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.280161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.280201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:36.928 [2024-12-04 14:25:38.280218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:24:36.928 [2024-12-04 14:25:38.280234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.928 [2024-12-04 14:25:38.280277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.928 [2024-12-04 14:25:38.280439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:36.929 [2024-12-04 14:25:38.280455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:36.929 [2024-12-04 14:25:38.280470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.929 [2024-12-04 14:25:38.292401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.929 [2024-12-04 14:25:38.292486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:36.929 [2024-12-04 14:25:38.292528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.876 ms 00:24:36.929 [2024-12-04 14:25:38.292547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.929 [2024-12-04 14:25:38.301482] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:36.929 [2024-12-04 14:25:38.302241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.929 [2024-12-04 14:25:38.302316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:36.929 [2024-12-04 14:25:38.302356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.627 ms 00:24:36.929 [2024-12-04 14:25:38.302373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.929 [2024-12-04 
14:25:38.324514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.929 [2024-12-04 14:25:38.324609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:36.929 [2024-12-04 14:25:38.324662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.097 ms 00:24:36.929 [2024-12-04 14:25:38.324680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.929 [2024-12-04 14:25:38.324730] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:24:36.929 [2024-12-04 14:25:38.324781] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:24:40.230 [2024-12-04 14:25:41.218129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.218338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:40.230 [2024-12-04 14:25:41.218420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2893.381 ms 00:24:40.230 [2024-12-04 14:25:41.218445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.218545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.218610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:40.230 [2024-12-04 14:25:41.218640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:24:40.230 [2024-12-04 14:25:41.218659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.242458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.242569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:40.230 [2024-12-04 14:25:41.242626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.700 ms 00:24:40.230 [2024-12-04 14:25:41.242649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.265946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.266047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:24:40.230 [2024-12-04 14:25:41.266121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.252 ms 00:24:40.230 [2024-12-04 14:25:41.266145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.266468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.266508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:40.230 [2024-12-04 14:25:41.266646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:24:40.230 [2024-12-04 14:25:41.266665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.329636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.329746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:40.230 [2024-12-04 14:25:41.329801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 62.925 ms 00:24:40.230 [2024-12-04 14:25:41.329823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.354231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:24:40.230 [2024-12-04 14:25:41.354338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:40.230 [2024-12-04 14:25:41.354389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.364 ms 00:24:40.230 [2024-12-04 14:25:41.354426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.355922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.230 [2024-12-04 14:25:41.356044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:24:40.230 [2024-12-04 14:25:41.356066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:24:40.230 [2024-12-04 14:25:41.356074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.230 [2024-12-04 14:25:41.380122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.231 [2024-12-04 14:25:41.380153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:40.231 [2024-12-04 14:25:41.380165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.990 ms 00:24:40.231 [2024-12-04 14:25:41.380172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.231 [2024-12-04 14:25:41.380211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.231 [2024-12-04 14:25:41.380219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:40.231 [2024-12-04 14:25:41.380230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:40.231 [2024-12-04 14:25:41.380236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.231 [2024-12-04 14:25:41.380312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:40.231 [2024-12-04 14:25:41.380322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:40.231 [2024-12-04 14:25:41.380331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:24:40.231 [2024-12-04 14:25:41.380338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:40.231 [2024-12-04 14:25:41.381179] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3148.333 ms, result 0 00:24:40.231 { 00:24:40.231 "name": "ftl", 00:24:40.231 "uuid": "9ce649b4-bcad-4cec-9c2b-adcbb00d6b24" 00:24:40.231 } 00:24:40.231 14:25:41 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:40.231 [2024-12-04 14:25:41.580616] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.231 14:25:41 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:40.492 14:25:41 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:40.752 [2024-12-04 14:25:41.969038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:24:40.752 14:25:41 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:40.752 [2024-12-04 14:25:42.165972] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:40.752 14:25:42 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:41.013 Fill 
FTL, iteration 1 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:41.013 14:25:42 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:41.013 14:25:42 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:41.013 14:25:42 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:41.013 14:25:42 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:41.013 14:25:42 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:41.013 14:25:42 -- ftl/common.sh@163 -- # spdk_ini_pid=78015 00:24:41.013 14:25:42 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:41.013 14:25:42 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:41.013 14:25:42 -- ftl/common.sh@165 -- # waitforlisten 78015 /var/tmp/spdk.tgt.sock 00:24:41.013 14:25:42 -- common/autotest_common.sh@829 -- # '[' -z 78015 ']' 00:24:41.013 14:25:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:41.275 14:25:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.275 14:25:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:41.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:41.275 14:25:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.275 14:25:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.275 [2024-12-04 14:25:42.539289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
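With bs=1048576, count=1024, qd=2 and iterations=2 set just above, the test alternates 1 GiB fill passes with 1 GiB read-back passes, advancing seek/skip by count blocks each round and keeping one digest per iteration for the post-upgrade comparison. The loop shape, reconstructed from the xtrace (testdir abbreviates the test/ftl path):

    seek=0 skip=0 sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))            # next fill begins where this one ended
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done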
00:24:41.275 [2024-12-04 14:25:42.539553] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78015 ] 00:24:41.275 [2024-12-04 14:25:42.687544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.536 [2024-12-04 14:25:42.867498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:41.536 [2024-12-04 14:25:42.867797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.923 14:25:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.923 14:25:44 -- common/autotest_common.sh@862 -- # return 0 00:24:42.923 14:25:44 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:42.923 ftln1 00:24:42.923 14:25:44 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:42.923 14:25:44 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:43.184 14:25:44 -- ftl/common.sh@173 -- # echo ']}' 00:24:43.184 14:25:44 -- ftl/common.sh@176 -- # killprocess 78015 00:24:43.184 14:25:44 -- common/autotest_common.sh@936 -- # '[' -z 78015 ']' 00:24:43.184 14:25:44 -- common/autotest_common.sh@940 -- # kill -0 78015 00:24:43.184 14:25:44 -- common/autotest_common.sh@941 -- # uname 00:24:43.184 14:25:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.184 14:25:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78015 00:24:43.184 killing process with pid 78015 00:24:43.184 14:25:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:43.184 14:25:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:43.184 14:25:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78015' 00:24:43.184 14:25:44 -- common/autotest_common.sh@955 -- # kill 78015 00:24:43.184 14:25:44 -- common/autotest_common.sh@960 -- # wait 78015 00:24:44.602 14:25:45 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:44.602 14:25:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:44.602 [2024-12-04 14:25:45.775312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
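Both directions of dd traffic go through the tcp_dd helper traced above: a short-lived SPDK app on core 1 attaches the exported FTL namespace over NVMe/TCP as ftln1, save_subsystem_config writes its bdev config to ini.json, the helper app is killed, and every subsequent spdk_dd run just replays that JSON instead of re-attaching. A hedged sketch of the wrapper, reconstructed from the ftl/common.sh@198-199 trace:

    tcp_dd() {
        tcp_initiator_setup   # first call builds ini.json; later calls return 0 immediately
        "$spdk_dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$testdir/config/ini.json" "$@"
    }
    # e.g. the fill pass above expands to:
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0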
00:24:44.602 [2024-12-04 14:25:45.775417] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78064 ] 00:24:44.602 [2024-12-04 14:25:45.921162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.859 [2024-12-04 14:25:46.075276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.244  [2024-12-04T14:25:48.645Z] Copying: 260/1024 [MB] (260 MBps) [2024-12-04T14:25:49.583Z] Copying: 527/1024 [MB] (267 MBps) [2024-12-04T14:25:50.519Z] Copying: 794/1024 [MB] (267 MBps) [2024-12-04T14:25:51.087Z] Copying: 1024/1024 [MB] (average 266 MBps) 00:24:49.622 00:24:49.622 14:25:50 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:49.622 Calculate MD5 checksum, iteration 1 00:24:49.622 14:25:50 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:49.622 14:25:50 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:49.622 14:25:50 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:49.622 14:25:50 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:49.622 14:25:50 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:49.622 14:25:50 -- ftl/common.sh@154 -- # return 0 00:24:49.622 14:25:50 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:49.622 [2024-12-04 14:25:50.936100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:49.622 [2024-12-04 14:25:50.936646] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78121 ] 00:24:49.622 [2024-12-04 14:25:51.083753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.881 [2024-12-04 14:25:51.227901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.257  [2024-12-04T14:25:53.291Z] Copying: 701/1024 [MB] (701 MBps) [2024-12-04T14:25:53.860Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:24:52.395 00:24:52.395 14:25:53 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:52.395 14:25:53 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cd3cf335b036c61fbc5fa4e2ce08f8d9 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:54.324 Fill FTL, iteration 2 00:24:54.324 14:25:55 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:54.324 14:25:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:54.324 14:25:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:54.324 14:25:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:54.324 14:25:55 -- ftl/common.sh@154 -- # return 0 00:24:54.324 14:25:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:54.324 [2024-12-04 14:25:55.715302] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:54.324 [2024-12-04 14:25:55.715404] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78180 ] 00:24:54.581 [2024-12-04 14:25:55.860826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.581 [2024-12-04 14:25:56.037938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.958  [2024-12-04T14:25:58.794Z] Copying: 207/1024 [MB] (207 MBps) [2024-12-04T14:25:59.730Z] Copying: 475/1024 [MB] (268 MBps) [2024-12-04T14:26:00.664Z] Copying: 737/1024 [MB] (262 MBps) [2024-12-04T14:26:00.665Z] Copying: 1000/1024 [MB] (263 MBps) [2024-12-04T14:26:01.232Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:24:59.767 00:24:59.767 14:26:01 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:59.767 Calculate MD5 checksum, iteration 2 00:24:59.767 14:26:01 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:59.767 14:26:01 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:59.767 14:26:01 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:59.767 14:26:01 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:59.767 14:26:01 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:59.767 14:26:01 -- ftl/common.sh@154 -- # return 0 00:24:59.767 14:26:01 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:59.767 [2024-12-04 14:26:01.177937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
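Across the two iterations visible in this log, the write offset and the verify offset move in lockstep: the fill lands at --seek=1024 then --seek=2048 is prepared, while the readback uses --skip=0 then --skip=1024, so each 1 GiB slice of ftln1 is checksummed independently. A hypothetical loop with that shape (variable names illustrative, not the script's own; iteration 1's seek falls before this excerpt and by the pattern would be 0):

    seek=0 skip=0 i=0
    while (( i < iterations )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
        (( i++ ))
    done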
00:24:59.767 [2024-12-04 14:26:01.178206] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78233 ] 00:25:00.025 [2024-12-04 14:26:01.321919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.025 [2024-12-04 14:26:01.462676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.549  [2024-12-04T14:26:04.275Z] Copying: 702/1024 [MB] (702 MBps) [2024-12-04T14:26:05.213Z] Copying: 1024/1024 [MB] (average 680 MBps) 00:25:03.748 00:25:03.748 14:26:05 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:03.748 14:26:05 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e989ad3e27f9f4b8aa999b6c2709f9d1 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:06.366 [2024-12-04 14:26:07.407649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.366 [2024-12-04 14:26:07.407688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:06.366 [2024-12-04 14:26:07.407700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:06.366 [2024-12-04 14:26:07.407708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.366 [2024-12-04 14:26:07.407728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.366 [2024-12-04 14:26:07.407735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:06.366 [2024-12-04 14:26:07.407742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:06.366 [2024-12-04 14:26:07.407748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.366 [2024-12-04 14:26:07.407764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.366 [2024-12-04 14:26:07.407770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:06.366 [2024-12-04 14:26:07.407781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:06.366 [2024-12-04 14:26:07.407787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.366 [2024-12-04 14:26:07.407837] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.177 ms, result 0 00:25:06.366 true 00:25:06.366 14:26:07 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:06.366 { 00:25:06.366 "name": "ftl", 00:25:06.366 "properties": [ 00:25:06.366 { 00:25:06.366 "name": "superblock_version", 00:25:06.366 "value": 5, 00:25:06.366 "read-only": true 00:25:06.366 }, 00:25:06.366 { 00:25:06.366 "name": "base_device", 00:25:06.366 "bands": [ 00:25:06.366 { 00:25:06.366 "id": 0, 00:25:06.366 "state": "FREE", 00:25:06.366 "validity": 0.0 00:25:06.366 }, 00:25:06.366 { 00:25:06.366 "id": 1, 00:25:06.366 "state": "FREE", 00:25:06.366 "validity": 0.0 00:25:06.366 }, 00:25:06.366 { 00:25:06.366 "id": 2, 00:25:06.366 "state": "FREE", 00:25:06.366 "validity": 0.0 
00:25:06.366 }, 00:25:06.366 { 00:25:06.366 "id": 3, 00:25:06.366 "state": "FREE", 00:25:06.366 "validity": 0.0 00:25:06.366 }, 00:25:06.366 { 00:25:06.366 "id": 4, 00:25:06.366 "state": "FREE", 00:25:06.366 "validity": 0.0 00:25:06.366 }, 00:25:06.366 { 00:25:06.367 "id": 5, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 6, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 7, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 8, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 9, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 10, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 11, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 12, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 13, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 14, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 15, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 16, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 17, 00:25:06.367 "state": "FREE", 00:25:06.367 "validity": 0.0 00:25:06.367 } 00:25:06.367 ], 00:25:06.367 "read-only": true 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "name": "cache_device", 00:25:06.367 "type": "bdev", 00:25:06.367 "chunks": [ 00:25:06.367 { 00:25:06.367 "id": 0, 00:25:06.367 "state": "CLOSED", 00:25:06.367 "utilization": 1.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 1, 00:25:06.367 "state": "CLOSED", 00:25:06.367 "utilization": 1.0 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 2, 00:25:06.367 "state": "OPEN", 00:25:06.367 "utilization": 0.001953125 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "id": 3, 00:25:06.367 "state": "OPEN", 00:25:06.367 "utilization": 0.0 00:25:06.367 } 00:25:06.367 ], 00:25:06.367 "read-only": true 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "name": "verbose_mode", 00:25:06.367 "value": true, 00:25:06.367 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:06.367 }, 00:25:06.367 { 00:25:06.367 "name": "prep_upgrade_on_shutdown", 00:25:06.367 "value": false, 00:25:06.367 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:06.367 } 00:25:06.367 ] 00:25:06.367 } 00:25:06.367 14:26:07 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:06.367 [2024-12-04 14:26:07.775989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.367 [2024-12-04 14:26:07.776028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:06.367 [2024-12-04 14:26:07.776038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:06.367 [2024-12-04 14:26:07.776045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.367 [2024-12-04 14:26:07.776065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:25:06.367 [2024-12-04 14:26:07.776072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:06.367 [2024-12-04 14:26:07.776079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:06.367 [2024-12-04 14:26:07.776098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.367 [2024-12-04 14:26:07.776115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.367 [2024-12-04 14:26:07.776123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:06.367 [2024-12-04 14:26:07.776130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:06.367 [2024-12-04 14:26:07.776136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.367 [2024-12-04 14:26:07.776182] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:25:06.367 true 00:25:06.367 14:26:07 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:06.367 14:26:07 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:06.367 14:26:07 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:06.626 14:26:07 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:06.626 14:26:07 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:06.626 14:26:07 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:06.884 [2024-12-04 14:26:08.171554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.884 [2024-12-04 14:26:08.171592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:06.884 [2024-12-04 14:26:08.171602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:06.884 [2024-12-04 14:26:08.171608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.884 [2024-12-04 14:26:08.171625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.884 [2024-12-04 14:26:08.171631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:06.884 [2024-12-04 14:26:08.171637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:06.884 [2024-12-04 14:26:08.171642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.884 [2024-12-04 14:26:08.171657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:06.884 [2024-12-04 14:26:08.171663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:06.884 [2024-12-04 14:26:08.171669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:06.884 [2024-12-04 14:26:08.171674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:06.884 [2024-12-04 14:26:08.171717] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:25:06.884 true 00:25:06.884 14:26:08 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:07.143 { 00:25:07.143 "name": "ftl", 00:25:07.143 "properties": [ 00:25:07.143 { 00:25:07.143 "name": "superblock_version", 00:25:07.143 "value": 5, 00:25:07.143 "read-only": true 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 
"name": "base_device", 00:25:07.143 "bands": [ 00:25:07.143 { 00:25:07.143 "id": 0, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 1, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 2, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 3, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 4, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 5, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 6, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.143 "id": 7, 00:25:07.143 "state": "FREE", 00:25:07.143 "validity": 0.0 00:25:07.143 }, 00:25:07.143 { 00:25:07.144 "id": 8, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 9, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 10, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 11, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 12, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 13, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 14, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 15, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 16, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 17, 00:25:07.144 "state": "FREE", 00:25:07.144 "validity": 0.0 00:25:07.144 } 00:25:07.144 ], 00:25:07.144 "read-only": true 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "name": "cache_device", 00:25:07.144 "type": "bdev", 00:25:07.144 "chunks": [ 00:25:07.144 { 00:25:07.144 "id": 0, 00:25:07.144 "state": "CLOSED", 00:25:07.144 "utilization": 1.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 1, 00:25:07.144 "state": "CLOSED", 00:25:07.144 "utilization": 1.0 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 2, 00:25:07.144 "state": "OPEN", 00:25:07.144 "utilization": 0.001953125 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "id": 3, 00:25:07.144 "state": "OPEN", 00:25:07.144 "utilization": 0.0 00:25:07.144 } 00:25:07.144 ], 00:25:07.144 "read-only": true 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "name": "verbose_mode", 00:25:07.144 "value": true, 00:25:07.144 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:07.144 }, 00:25:07.144 { 00:25:07.144 "name": "prep_upgrade_on_shutdown", 00:25:07.144 "value": true, 00:25:07.144 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:07.144 } 00:25:07.144 ] 00:25:07.144 } 00:25:07.144 14:26:08 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:07.144 14:26:08 -- ftl/common.sh@130 -- # [[ -n 77890 ]] 00:25:07.144 14:26:08 -- ftl/common.sh@131 -- # killprocess 77890 00:25:07.144 14:26:08 -- common/autotest_common.sh@936 -- # '[' -z 77890 ']' 00:25:07.144 14:26:08 -- 
common/autotest_common.sh@940 -- # kill -0 77890 00:25:07.144 14:26:08 -- common/autotest_common.sh@941 -- # uname 00:25:07.144 14:26:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:07.144 14:26:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77890 00:25:07.144 14:26:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:07.144 14:26:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:07.144 14:26:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77890' 00:25:07.144 killing process with pid 77890 00:25:07.144 14:26:08 -- common/autotest_common.sh@955 -- # kill 77890 00:25:07.144 14:26:08 -- common/autotest_common.sh@960 -- # wait 77890 00:25:07.713 [2024-12-04 14:26:08.942702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:25:07.714 [2024-12-04 14:26:08.954361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.714 [2024-12-04 14:26:08.954394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:07.714 [2024-12-04 14:26:08.954405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:07.714 [2024-12-04 14:26:08.954419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.714 [2024-12-04 14:26:08.954435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:07.714 [2024-12-04 14:26:08.956519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.714 [2024-12-04 14:26:08.956542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:07.714 [2024-12-04 14:26:08.956550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.074 ms 00:25:07.714 [2024-12-04 14:26:08.956557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.908132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.908181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:17.712 [2024-12-04 14:26:17.908192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8951.521 ms 00:25:17.712 [2024-12-04 14:26:17.908199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.909171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.909185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:17.712 [2024-12-04 14:26:17.909192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:25:17.712 [2024-12-04 14:26:17.909199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.910040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.910060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:25:17.712 [2024-12-04 14:26:17.910067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:25:17.712 [2024-12-04 14:26:17.910073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.917853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.917879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:17.712 [2024-12-04 14:26:17.917886] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.738 ms 00:25:17.712 [2024-12-04 14:26:17.917892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.922952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.922977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:17.712 [2024-12-04 14:26:17.922986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.036 ms 00:25:17.712 [2024-12-04 14:26:17.922992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.923046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.923053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:17.712 [2024-12-04 14:26:17.923059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:17.712 [2024-12-04 14:26:17.923068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.930184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.930207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:17.712 [2024-12-04 14:26:17.930214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.105 ms 00:25:17.712 [2024-12-04 14:26:17.930219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.937542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.937564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:17.712 [2024-12-04 14:26:17.937570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.301 ms 00:25:17.712 [2024-12-04 14:26:17.937575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.944542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.944564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:17.712 [2024-12-04 14:26:17.944571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.945 ms 00:25:17.712 [2024-12-04 14:26:17.944576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.951602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.712 [2024-12-04 14:26:17.951710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:17.712 [2024-12-04 14:26:17.951721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.976 ms 00:25:17.712 [2024-12-04 14:26:17.951727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.712 [2024-12-04 14:26:17.951749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:17.712 [2024-12-04 14:26:17.951759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:17.712 [2024-12-04 14:26:17.951767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:17.712 [2024-12-04 14:26:17.951773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:17.712 [2024-12-04 14:26:17.951779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:17.713 [2024-12-04 14:26:17.951871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:17.713 [2024-12-04 14:26:17.951877] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ce649b4-bcad-4cec-9c2b-adcbb00d6b24 00:25:17.713 [2024-12-04 14:26:17.951883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:17.713 [2024-12-04 14:26:17.951888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:17.713 [2024-12-04 14:26:17.951893] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:17.713 [2024-12-04 14:26:17.951899] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:17.713 [2024-12-04 14:26:17.951904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:17.713 [2024-12-04 14:26:17.951910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:17.713 [2024-12-04 14:26:17.951917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:17.713 [2024-12-04 14:26:17.951923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:17.713 [2024-12-04 14:26:17.951928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:17.713 [2024-12-04 14:26:17.951934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.713 [2024-12-04 14:26:17.951939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:17.713 [2024-12-04 14:26:17.951945] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:25:17.713 [2024-12-04 14:26:17.951951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.961673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.713 [2024-12-04 14:26:17.961696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:17.713 [2024-12-04 14:26:17.961705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.711 ms 00:25:17.713 [2024-12-04 14:26:17.961711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.961863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:17.713 [2024-12-04 14:26:17.961869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:17.713 [2024-12-04 14:26:17.961876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.134 ms 00:25:17.713 [2024-12-04 14:26:17.961881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.997354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:17.997381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:17.713 [2024-12-04 14:26:17.997389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:17.997399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.997423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:17.997429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:17.713 [2024-12-04 14:26:17.997435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:17.997441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.997486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:17.997493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:17.713 [2024-12-04 14:26:17.997500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:17.997505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:17.997519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:17.997525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:17.713 [2024-12-04 14:26:17.997530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:17.997536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.056516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.056551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:17.713 [2024-12-04 14:26:18.056560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.056568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:17.713 
[2024-12-04 14:26:18.079180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:17.713 [2024-12-04 14:26:18.079238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:17.713 [2024-12-04 14:26:18.079289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:17.713 [2024-12-04 14:26:18.079373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:17.713 [2024-12-04 14:26:18.079414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:17.713 [2024-12-04 14:26:18.079461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.713 [2024-12-04 14:26:18.079467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.713 [2024-12-04 14:26:18.079500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:17.713 [2024-12-04 14:26:18.079509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:17.713 [2024-12-04 14:26:18.079515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:17.714 [2024-12-04 14:26:18.079520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:17.714 [2024-12-04 14:26:18.079608] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9125.194 ms, result 0 00:25:18.652 14:26:20 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:18.652 14:26:20 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:18.652 14:26:20 -- ftl/common.sh@81 -- # local base_bdev= 00:25:18.652 14:26:20 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:18.652 14:26:20 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:18.652 14:26:20 -- ftl/common.sh@89 -- # spdk_tgt_pid=78456 00:25:18.652 14:26:20 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:18.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.652 14:26:20 -- ftl/common.sh@91 -- # waitforlisten 78456 00:25:18.652 14:26:20 -- common/autotest_common.sh@829 -- # '[' -z 78456 ']' 00:25:18.652 14:26:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.652 14:26:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.652 14:26:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.652 14:26:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.652 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:18.652 14:26:20 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:18.652 [2024-12-04 14:26:20.087341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.652 [2024-12-04 14:26:20.087458] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78456 ] 00:25:18.912 [2024-12-04 14:26:20.236551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.171 [2024-12-04 14:26:20.387600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:19.171 [2024-12-04 14:26:20.387754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.739 [2024-12-04 14:26:20.914914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:19.739 [2024-12-04 14:26:20.914966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:19.739 [2024-12-04 14:26:21.051161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.739 [2024-12-04 14:26:21.051196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:19.739 [2024-12-04 14:26:21.051206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:19.739 [2024-12-04 14:26:21.051212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.739 [2024-12-04 14:26:21.051250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.739 [2024-12-04 14:26:21.051259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:19.739 [2024-12-04 14:26:21.051265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:25:19.739 [2024-12-04 14:26:21.051271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.051285] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:19.740 [2024-12-04 14:26:21.051823] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:19.740 [2024-12-04 14:26:21.051835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.051840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:19.740 [2024-12-04 14:26:21.051847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:25:19.740 [2024-12-04 14:26:21.051852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.052813] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:19.740 [2024-12-04 14:26:21.062402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.062434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:19.740 [2024-12-04 14:26:21.062443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.591 ms 00:25:19.740 [2024-12-04 14:26:21.062449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.062493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.062501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:19.740 [2024-12-04 14:26:21.062507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:19.740 [2024-12-04 14:26:21.062512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.066760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.066785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:19.740 [2024-12-04 14:26:21.066793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.201 ms 00:25:19.740 [2024-12-04 14:26:21.066801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.066830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.066837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:19.740 [2024-12-04 14:26:21.066843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:19.740 [2024-12-04 14:26:21.066848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.066882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.066889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:19.740 [2024-12-04 14:26:21.066895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:19.740 [2024-12-04 14:26:21.066901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.066922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:19.740 [2024-12-04 14:26:21.069495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.069598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:19.740 [2024-12-04 14:26:21.069613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.580 ms 00:25:19.740 [2024-12-04 14:26:21.069619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.069642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.069649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:19.740 [2024-12-04 14:26:21.069655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:19.740 [2024-12-04 14:26:21.069660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.069677] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
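The relaunch traced just above (ftl/common.sh@81-91) is the heart of the upgrade test: spdk_tgt comes back from the tgt.json saved before shutdown, and the @93 check shows that a present config short-circuits any bdev re-creation. Reconstructed from the xtrace, minus error handling:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # @91: poll /var/tmp/spdk.sock until the RPC server answers
    [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]   # @93: config restored, return 0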
00:25:19.740 [2024-12-04 14:26:21.069691] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:25:19.740 [2024-12-04 14:26:21.069715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:19.740 [2024-12-04 14:26:21.069728] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:25:19.740 [2024-12-04 14:26:21.069784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:19.740 [2024-12-04 14:26:21.069792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:19.740 [2024-12-04 14:26:21.069800] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:19.740 [2024-12-04 14:26:21.069807] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:19.740 [2024-12-04 14:26:21.069814] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:19.740 [2024-12-04 14:26:21.069820] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:19.740 [2024-12-04 14:26:21.069827] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:19.740 [2024-12-04 14:26:21.069833] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:19.740 [2024-12-04 14:26:21.069840] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:19.740 [2024-12-04 14:26:21.069846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.069851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:19.740 [2024-12-04 14:26:21.069857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:25:19.740 [2024-12-04 14:26:21.069863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.069910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.740 [2024-12-04 14:26:21.069917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:19.740 [2024-12-04 14:26:21.069922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:25:19.740 [2024-12-04 14:26:21.069928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.740 [2024-12-04 14:26:21.069985] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:19.740 [2024-12-04 14:26:21.069992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:19.740 [2024-12-04 14:26:21.069998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:19.740 [2024-12-04 14:26:21.070004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:19.740 [2024-12-04 14:26:21.070015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:19.740 [2024-12-04 14:26:21.070026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:19.740 [2024-12-04 14:26:21.070031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:25:19.740 [2024-12-04 14:26:21.070036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:19.740 [2024-12-04 14:26:21.070047] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:19.740 [2024-12-04 14:26:21.070052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070057] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:19.740 [2024-12-04 14:26:21.070063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:19.740 [2024-12-04 14:26:21.070078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:19.740 [2024-12-04 14:26:21.070082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.740 [2024-12-04 14:26:21.070102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:19.740 [2024-12-04 14:26:21.070107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:19.740 [2024-12-04 14:26:21.070112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:19.740 [2024-12-04 14:26:21.070118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:19.741 [2024-12-04 14:26:21.070123] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:19.741 [2024-12-04 14:26:21.070138] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070148] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:19.741 [2024-12-04 14:26:21.070153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:19.741 [2024-12-04 14:26:21.070167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:19.741 [2024-12-04 14:26:21.070184] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.741 [2024-12-04 14:26:21.070195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:19.741 [2024-12-04 14:26:21.070199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.741 [2024-12-04 14:26:21.070208] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:25:19.741 [2024-12-04 14:26:21.070214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:19.741 [2024-12-04 14:26:21.070219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:19.741 [2024-12-04 14:26:21.070230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:19.741 [2024-12-04 14:26:21.070239] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:19.741 [2024-12-04 14:26:21.070244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:19.741 [2024-12-04 14:26:21.070249] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:19.741 [2024-12-04 14:26:21.070254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:19.741 [2024-12-04 14:26:21.070259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:19.741 [2024-12-04 14:26:21.070265] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:19.741 [2024-12-04 14:26:21.070272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:19.741 [2024-12-04 14:26:21.070286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070297] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:19.741 [2024-12-04 14:26:21.070304] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:19.741 [2024-12-04 14:26:21.070314] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:19.741 [2024-12-04 14:26:21.070320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:19.741 [2024-12-04 14:26:21.070325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070341] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:19.741 [2024-12-04 14:26:21.070352] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:19.741 [2024-12-04 14:26:21.070357] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:19.741 [2024-12-04 14:26:21.070363] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070369] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:19.741 [2024-12-04 14:26:21.070375] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:19.741 [2024-12-04 14:26:21.070380] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:19.741 [2024-12-04 14:26:21.070385] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:19.741 [2024-12-04 14:26:21.070391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.070396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:19.741 [2024-12-04 14:26:21.070402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.438 ms 00:25:19.741 [2024-12-04 14:26:21.070407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.081869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.081895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:19.741 [2024-12-04 14:26:21.081902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.416 ms 00:25:19.741 [2024-12-04 14:26:21.081908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.081935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.081941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:19.741 [2024-12-04 14:26:21.081947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:19.741 [2024-12-04 14:26:21.081955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.105715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.105743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:19.741 [2024-12-04 14:26:21.105751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.725 ms 00:25:19.741 [2024-12-04 14:26:21.105758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.105778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.105785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:19.741 [2024-12-04 14:26:21.105792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:19.741 [2024-12-04 14:26:21.105799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.106116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.106130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:19.741 [2024-12-04 
14:26:21.106137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:25:19.741 [2024-12-04 14:26:21.106142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.106172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.106178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:19.741 [2024-12-04 14:26:21.106183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:19.741 [2024-12-04 14:26:21.106189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.117903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.741 [2024-12-04 14:26:21.117929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:19.741 [2024-12-04 14:26:21.117936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.698 ms 00:25:19.741 [2024-12-04 14:26:21.117942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.741 [2024-12-04 14:26:21.127583] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:19.741 [2024-12-04 14:26:21.127621] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:19.741 [2024-12-04 14:26:21.127629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.127636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:25:19.742 [2024-12-04 14:26:21.127643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.612 ms 00:25:19.742 [2024-12-04 14:26:21.127653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.742 [2024-12-04 14:26:21.138137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.138163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:25:19.742 [2024-12-04 14:26:21.138171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.453 ms 00:25:19.742 [2024-12-04 14:26:21.138178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.742 [2024-12-04 14:26:21.146769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.146795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:25:19.742 [2024-12-04 14:26:21.146802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.565 ms 00:25:19.742 [2024-12-04 14:26:21.146808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.742 [2024-12-04 14:26:21.155562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.155586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:25:19.742 [2024-12-04 14:26:21.155593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.727 ms 00:25:19.742 [2024-12-04 14:26:21.155598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.742 [2024-12-04 14:26:21.155874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.155888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:19.742 [2024-12-04 14:26:21.155895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 
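Worth pulling out from earlier in this log (upgrade_shutdown.sh@59-64, just before the shutdown): the test refuses to proceed unless the NV cache actually holds data, counting cache_device chunks with non-zero utilization via jq; the trace records used=3. As a standalone check, with the jq filter and rpc.py path verbatim from the trace:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
           jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # used=3 above, so the shutdown path is exercised against real data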
00:25:19.742 [2024-12-04 14:26:21.155901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:19.742 [2024-12-04 14:26:21.201439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:19.742 [2024-12-04 14:26:21.201469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:19.742 [2024-12-04 14:26:21.201477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.524 ms 00:25:19.742 [2024-12-04 14:26:21.201484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.209379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:20.001 [2024-12-04 14:26:21.209914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.209937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:20.001 [2024-12-04 14:26:21.209947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.386 ms 00:25:20.001 [2024-12-04 14:26:21.209953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.209998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.210006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:25:20.001 [2024-12-04 14:26:21.210013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:20.001 [2024-12-04 14:26:21.210018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.210049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.210056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:20.001 [2024-12-04 14:26:21.210062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:20.001 [2024-12-04 14:26:21.210069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.211015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.211041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:20.001 [2024-12-04 14:26:21.211048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:25:20.001 [2024-12-04 14:26:21.211054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.211073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.211079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:20.001 [2024-12-04 14:26:21.211097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:20.001 [2024-12-04 14:26:21.211103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.211132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:20.001 [2024-12-04 14:26:21.211141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.211147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:20.001 [2024-12-04 14:26:21.211152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:20.001 [2024-12-04 14:26:21.211157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.228767] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.228792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:20.001 [2024-12-04 14:26:21.228800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.595 ms 00:25:20.001 [2024-12-04 14:26:21.228810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.228860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.001 [2024-12-04 14:26:21.228867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:20.001 [2024-12-04 14:26:21.228873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:20.001 [2024-12-04 14:26:21.228879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.001 [2024-12-04 14:26:21.229583] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 178.120 ms, result 0 00:25:20.001 [2024-12-04 14:26:21.245018] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.001 [2024-12-04 14:26:21.261028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:20.001 [2024-12-04 14:26:21.269137] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:20.260 14:26:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.260 14:26:21 -- common/autotest_common.sh@862 -- # return 0 00:25:20.260 14:26:21 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:20.260 14:26:21 -- ftl/common.sh@95 -- # return 0 00:25:20.260 14:26:21 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:20.519 [2024-12-04 14:26:21.746099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.519 [2024-12-04 14:26:21.746132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:20.519 [2024-12-04 14:26:21.746142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:20.519 [2024-12-04 14:26:21.746148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.519 [2024-12-04 14:26:21.746167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.519 [2024-12-04 14:26:21.746174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:20.519 [2024-12-04 14:26:21.746182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:20.519 [2024-12-04 14:26:21.746187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.519 [2024-12-04 14:26:21.746203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:20.519 [2024-12-04 14:26:21.746210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:20.519 [2024-12-04 14:26:21.746215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:20.519 [2024-12-04 14:26:21.746221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:20.519 [2024-12-04 14:26:21.746266] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.175 ms, result 0 00:25:20.519 true 00:25:20.519 14:26:21 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl
{
  "name": "ftl",
  "properties": [
    {
      "name": "superblock_version",
      "value": 5,
      "read-only": true
    },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "OPEN", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "FREE", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "verbose_mode",
      "value": true,
      "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
    },
    {
      "name": "prep_upgrade_on_shutdown",
      "value": false,
      "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
    }
  ]
}
00:25:20.520 14:26:21 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:25:20.520 14:26:21 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:20.520 14:26:21 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:20.779 14:26:22 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:25:20.779 14:26:22 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:25:20.779 14:26:22 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:25:20.779 14:26:22 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:25:20.779 14:26:22 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:21.038 Validate MD5 checksum, iteration 1 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:21.038 14:26:22 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:21.038 14:26:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:21.038 14:26:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:21.038 14:26:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:21.038 14:26:22 -- ftl/common.sh@154 -- # return 0 00:25:21.038 14:26:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:21.038 [2024-12-04 14:26:22.410208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
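The two jq filters above are the test's idle checks: bdev_ftl_get_properties is queried twice and its output reduced to the number of cache chunks with non-zero utilization and the number of bands in the OPENED state, both of which come out 0 here. Note that the second filter selects a property literally named "bands", while the dump above exposes the band list under a property named "base_device"; as printed, that filter always yields 0, which happens to be the right answer in this run since every band is CLOSED or FREE anyway. A minimal sketch of the check, assuming a single captured JSON document instead of the two separate RPC calls the script actually makes:

  # Sketch of the idle checks traced above (paraphrase, not the script source).
  props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)
  # cache chunks that still hold data
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[]
              | select(.utilization != 0.0)] | length' <<< "$props")
  # bands currently open for writing (see the filter-name caveat above)
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[]
                | select(.state == "OPENED")] | length' <<< "$props")
  [[ $used -eq 0 && $opened -eq 0 ]] || echo "ftl: device is not idle" >&2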
00:25:21.038 [2024-12-04 14:26:22.410637] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78495 ] 00:25:21.298 [2024-12-04 14:26:22.558779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.298 [2024-12-04 14:26:22.730408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.205  [2024-12-04T14:26:24.932Z] Copying: 714/1024 [MB] (714 MBps) [2024-12-04T14:26:26.893Z] Copying: 1024/1024 [MB] (average 698 MBps) 00:25:25.428 00:25:25.428 14:26:26 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:25.428 14:26:26 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:27.958 Validate MD5 checksum, iteration 2 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@103 -- # sum=cd3cf335b036c61fbc5fa4e2ce08f8d9 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@105 -- # [[ cd3cf335b036c61fbc5fa4e2ce08f8d9 != \c\d\3\c\f\3\3\5\b\0\3\6\c\6\1\f\b\c\5\f\a\4\e\2\c\e\0\8\f\8\d\9 ]] 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:27.958 14:26:28 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:27.958 14:26:28 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:27.958 14:26:28 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:27.958 14:26:28 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:27.958 14:26:28 -- ftl/common.sh@154 -- # return 0 00:25:27.958 14:26:28 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:27.958 [2024-12-04 14:26:28.886475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
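Iteration 1 above reads the first 1 GiB window of ftln1 over NVMe/TCP through a short-lived spdk_dd process, hashes it, and compares the sum (cd3cf335b036c61fbc5fa4e2ce08f8d9) against a reference checksum captured earlier in the run; iteration 2 then advances the window by 1024 blocks (1 GiB). A condensed sketch of the loop as the xtrace suggests it, where $testfile and the md5 array of reference sums are stand-ins for the script's actual bookkeeping:

  # Condensed paraphrase of test_validate_checksum as traced above.
  test_validate_checksum() {
      local skip=0 i sum
      for (( i = 0; i < iterations; i++ )); do
          echo "Validate MD5 checksum, iteration $(( i + 1 ))"
          tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
          (( skip += 1024 ))
          sum=$(md5sum "$testfile" | cut -f1 '-d ')
          # a mismatch with the sum recorded at write time fails the test
          [[ $sum != "${md5[i]}" ]] && return 1
      done
  }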
00:25:27.958 [2024-12-04 14:26:28.886555] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78568 ] 00:25:27.958 [2024-12-04 14:26:29.028700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.958 [2024-12-04 14:26:29.177820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.335  [2024-12-04T14:26:31.062Z] Copying: 749/1024 [MB] (749 MBps) [2024-12-04T14:26:32.976Z] Copying: 1024/1024 [MB] (average 733 MBps) 00:25:31.511 00:25:31.511 14:26:32 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:31.511 14:26:32 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@103 -- # sum=e989ad3e27f9f4b8aa999b6c2709f9d1 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@105 -- # [[ e989ad3e27f9f4b8aa999b6c2709f9d1 != \e\9\8\9\a\d\3\e\2\7\f\9\f\4\b\8\a\a\9\9\9\b\6\c\2\7\0\9\f\9\d\1 ]] 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:25:33.415 14:26:34 -- ftl/common.sh@137 -- # [[ -n 78456 ]] 00:25:33.415 14:26:34 -- ftl/common.sh@138 -- # kill -9 78456 00:25:33.415 14:26:34 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:25:33.415 14:26:34 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:25:33.415 14:26:34 -- ftl/common.sh@81 -- # local base_bdev= 00:25:33.415 14:26:34 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:33.415 14:26:34 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:33.415 14:26:34 -- ftl/common.sh@89 -- # spdk_tgt_pid=78630 00:25:33.415 14:26:34 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:33.415 14:26:34 -- ftl/common.sh@91 -- # waitforlisten 78630 00:25:33.415 14:26:34 -- common/autotest_common.sh@829 -- # '[' -z 78630 ']' 00:25:33.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.415 14:26:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.415 14:26:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.416 14:26:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.416 14:26:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.416 14:26:34 -- common/autotest_common.sh@10 -- # set +x 00:25:33.416 14:26:34 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:33.416 [2024-12-04 14:26:34.528538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
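This is the dirty-shutdown step the test is named for: the running target (pid 78456) is killed with SIGKILL, so FTL never gets a chance to persist its clean-shutdown state, and a fresh spdk_tgt (pid 78630) is immediately started from the saved tgt.json. A rough shape of the two helpers as the xtrace presents them; the bodies below are an interpretation of the trace, not the ftl/common.sh source, and $spdk_dir stands in for /home/vagrant/spdk_repo/spdk:

  tcp_target_shutdown_dirty() {
      # SIGKILL: FTL cannot persist its clean-shutdown state
      [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
      unset spdk_tgt_pid
  }

  tcp_target_setup() {
      "$spdk_dir/build/bin/spdk_tgt" '--cpumask=[0]' \
          --config="$spdk_dir/test/ftl/config/tgt.json" &
      spdk_tgt_pid=$!
      waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs
  }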
00:25:33.416 [2024-12-04 14:26:34.528648] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78630 ] 00:25:33.416 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 78456 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:25:33.416 [2024-12-04 14:26:34.678752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.416 [2024-12-04 14:26:34.853059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:33.416 [2024-12-04 14:26:34.853259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.357 [2024-12-04 14:26:35.488569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:34.357 [2024-12-04 14:26:35.488625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:34.357 [2024-12-04 14:26:35.629592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.629627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:34.357 [2024-12-04 14:26:35.629640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:34.357 [2024-12-04 14:26:35.629648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.629698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.629710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:34.357 [2024-12-04 14:26:35.629718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:34.357 [2024-12-04 14:26:35.629725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.629743] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:34.357 [2024-12-04 14:26:35.630471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:34.357 [2024-12-04 14:26:35.630492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.630500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:34.357 [2024-12-04 14:26:35.630508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:25:34.357 [2024-12-04 14:26:35.630515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.630785] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:34.357 [2024-12-04 14:26:35.647158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.647182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:34.357 [2024-12-04 14:26:35.647193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.373 ms 00:25:34.357 [2024-12-04 14:26:35.647201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.656124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.656150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:34.357 [2024-12-04 14:26:35.656159] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:34.357 [2024-12-04 14:26:35.656166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.656463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.656473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:34.357 [2024-12-04 14:26:35.656481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:25:34.357 [2024-12-04 14:26:35.656488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.656520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.656527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:34.357 [2024-12-04 14:26:35.656534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:34.357 [2024-12-04 14:26:35.656544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.656567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.656574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:34.357 [2024-12-04 14:26:35.656581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:34.357 [2024-12-04 14:26:35.656588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.656611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:34.357 [2024-12-04 14:26:35.659603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.659625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:34.357 [2024-12-04 14:26:35.659633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.000 ms 00:25:34.357 [2024-12-04 14:26:35.659640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.659671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.357 [2024-12-04 14:26:35.659678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:34.357 [2024-12-04 14:26:35.659687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:34.357 [2024-12-04 14:26:35.659694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.357 [2024-12-04 14:26:35.659715] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:34.357 [2024-12-04 14:26:35.659731] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:25:34.357 [2024-12-04 14:26:35.659761] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:34.357 [2024-12-04 14:26:35.659774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:25:34.357 [2024-12-04 14:26:35.659844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:34.357 [2024-12-04 14:26:35.659856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:34.357 [2024-12-04 14:26:35.659867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:25:34.357 [2024-12-04 14:26:35.659876] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:34.357 [2024-12-04 14:26:35.659884] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:34.357 [2024-12-04 14:26:35.659891] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:34.357 [2024-12-04 14:26:35.659898] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:34.357 [2024-12-04 14:26:35.659905] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:34.357 [2024-12-04 14:26:35.659911] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:34.358 [2024-12-04 14:26:35.659918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.659924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:34.358 [2024-12-04 14:26:35.659931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:25:34.358 [2024-12-04 14:26:35.659940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.659999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.660006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:34.358 [2024-12-04 14:26:35.660013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:34.358 [2024-12-04 14:26:35.660019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.660101] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:34.358 [2024-12-04 14:26:35.660111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:34.358 [2024-12-04 14:26:35.660118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:34.358 [2024-12-04 14:26:35.660147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:34.358 [2024-12-04 14:26:35.660161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:34.358 [2024-12-04 14:26:35.660167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:34.358 [2024-12-04 14:26:35.660173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:34.358 [2024-12-04 14:26:35.660186] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:34.358 [2024-12-04 14:26:35.660192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:34.358 [2024-12-04 14:26:35.660205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660217] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:25:34.358 [2024-12-04 14:26:35.660224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:34.358 [2024-12-04 14:26:35.660231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:34.358 [2024-12-04 14:26:35.660243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:34.358 [2024-12-04 14:26:35.660249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:34.358 [2024-12-04 14:26:35.660261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:34.358 [2024-12-04 14:26:35.660280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:34.358 [2024-12-04 14:26:35.660298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:34.358 [2024-12-04 14:26:35.660316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660328] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:34.358 [2024-12-04 14:26:35.660334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:34.358 [2024-12-04 14:26:35.660356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660368] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:34.358 [2024-12-04 14:26:35.660375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:34.358 [2024-12-04 14:26:35.660382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.358 [2024-12-04 14:26:35.660396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:34.358 [2024-12-04 14:26:35.660402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:34.358 [2024-12-04 14:26:35.660408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:34.358 [2024-12-04 14:26:35.660415] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:34.358 [2024-12-04 14:26:35.660421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:25:34.358 [2024-12-04 14:26:35.660427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:34.358 [2024-12-04 14:26:35.660434] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:34.358 [2024-12-04 14:26:35.660442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:34.358 [2024-12-04 14:26:35.660458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:34.358 [2024-12-04 14:26:35.660483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:34.358 [2024-12-04 14:26:35.660490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:34.358 [2024-12-04 14:26:35.660497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:34.358 [2024-12-04 14:26:35.660503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660523] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:34.358 [2024-12-04 14:26:35.660538] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:34.358 [2024-12-04 14:26:35.660545] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:34.358 [2024-12-04 14:26:35.660552] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660560] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.358 [2024-12-04 14:26:35.660570] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:34.358 [2024-12-04 14:26:35.660577] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:34.358 
[2024-12-04 14:26:35.660584] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:34.358 [2024-12-04 14:26:35.660591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.660598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:34.358 [2024-12-04 14:26:35.660605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:25:34.358 [2024-12-04 14:26:35.660616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.673742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.673766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:34.358 [2024-12-04 14:26:35.673779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.077 ms 00:25:34.358 [2024-12-04 14:26:35.673786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.673819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.673827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:34.358 [2024-12-04 14:26:35.673834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:34.358 [2024-12-04 14:26:35.673841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.703875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.703900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:34.358 [2024-12-04 14:26:35.703910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.993 ms 00:25:34.358 [2024-12-04 14:26:35.703916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.703941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.703949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:34.358 [2024-12-04 14:26:35.703956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:34.358 [2024-12-04 14:26:35.703963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.358 [2024-12-04 14:26:35.704040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.358 [2024-12-04 14:26:35.704049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:34.358 [2024-12-04 14:26:35.704056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:25:34.359 [2024-12-04 14:26:35.704063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.704107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.704117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:34.359 [2024-12-04 14:26:35.704124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:34.359 [2024-12-04 14:26:35.704131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.718614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.718641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:34.359 [2024-12-04 
14:26:35.718652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.462 ms 00:25:34.359 [2024-12-04 14:26:35.718660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.718744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.718755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:34.359 [2024-12-04 14:26:35.718764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:34.359 [2024-12-04 14:26:35.718772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.735354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.735381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:34.359 [2024-12-04 14:26:35.735391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.564 ms 00:25:34.359 [2024-12-04 14:26:35.735398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.744363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.744387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:34.359 [2024-12-04 14:26:35.744396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:25:34.359 [2024-12-04 14:26:35.744403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.801258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.801294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:34.359 [2024-12-04 14:26:35.801306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 56.810 ms 00:25:34.359 [2024-12-04 14:26:35.801314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.801400] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:34.359 [2024-12-04 14:26:35.801442] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:34.359 [2024-12-04 14:26:35.801480] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:34.359 [2024-12-04 14:26:35.801518] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:34.359 [2024-12-04 14:26:35.801525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.801532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:34.359 [2024-12-04 14:26:35.801543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:25:34.359 [2024-12-04 14:26:35.801552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.801597] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:34.359 [2024-12-04 14:26:35.801607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.801615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:34.359 [2024-12-04 14:26:35.801623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:34.359 [2024-12-04 
14:26:35.801629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.359 [2024-12-04 14:26:35.816825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.359 [2024-12-04 14:26:35.816852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:34.359 [2024-12-04 14:26:35.816862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.176 ms 00:25:34.359 [2024-12-04 14:26:35.816869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.618 [2024-12-04 14:26:35.825575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.618 [2024-12-04 14:26:35.825612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:34.618 [2024-12-04 14:26:35.825623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:34.618 [2024-12-04 14:26:35.825631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.618 [2024-12-04 14:26:35.825692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.618 [2024-12-04 14:26:35.825703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:25:34.618 [2024-12-04 14:26:35.825711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:34.618 [2024-12-04 14:26:35.825717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.618 [2024-12-04 14:26:35.825843] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:25:35.190 [2024-12-04 14:26:36.380779] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:25:35.190 [2024-12-04 14:26:36.380916] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:25:35.761 [2024-12-04 14:26:36.930024] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:25:35.761 [2024-12-04 14:26:36.930120] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.761 [2024-12-04 14:26:36.930134] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:35.761 [2024-12-04 14:26:36.930145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.761 [2024-12-04 14:26:36.930154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:35.761 [2024-12-04 14:26:36.930165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1104.405 ms 00:25:35.761 [2024-12-04 14:26:36.930173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.761 [2024-12-04 14:26:36.930212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.761 [2024-12-04 14:26:36.930220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:35.761 [2024-12-04 14:26:36.930229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:35.761 [2024-12-04 14:26:36.930236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.761 [2024-12-04 14:26:36.941207] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:35.761 [2024-12-04 14:26:36.941309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.761 [2024-12-04 14:26:36.941320] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:35.761 [2024-12-04 14:26:36.941329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.054 ms 00:25:35.761 [2024-12-04 14:26:36.941336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.761 [2024-12-04 14:26:36.941992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.942004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:25:35.762 [2024-12-04 14:26:36.942012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.595 ms 00:25:35.762 [2024-12-04 14:26:36.942020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.944252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.944269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:35.762 [2024-12-04 14:26:36.944279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:25:35.762 [2024-12-04 14:26:36.944287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.968532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.968565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:25:35.762 [2024-12-04 14:26:36.968576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.222 ms 00:25:35.762 [2024-12-04 14:26:36.968583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.968672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.968683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:35.762 [2024-12-04 14:26:36.968691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:35.762 [2024-12-04 14:26:36.968702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.969875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.969905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:35.762 [2024-12-04 14:26:36.969913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.156 ms 00:25:35.762 [2024-12-04 14:26:36.969920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.969949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.969956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:35.762 [2024-12-04 14:26:36.969964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:35.762 [2024-12-04 14:26:36.969971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.970012] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:35.762 [2024-12-04 14:26:36.970021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.970029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:35.762 [2024-12-04 14:26:36.970038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:25:35.762 [2024-12-04 14:26:36.970045] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.970113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:35.762 [2024-12-04 14:26:36.970123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:35.762 [2024-12-04 14:26:36.970131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:35.762 [2024-12-04 14:26:36.970137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:35.762 [2024-12-04 14:26:36.970959] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1340.965 ms, result 0 00:25:35.762 [2024-12-04 14:26:36.984856] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.762 [2024-12-04 14:26:37.000855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:35.762 [2024-12-04 14:26:37.008957] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:36.023 14:26:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.023 14:26:37 -- common/autotest_common.sh@862 -- # return 0 00:25:36.023 14:26:37 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:36.023 14:26:37 -- ftl/common.sh@95 -- # return 0 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:36.023 Validate MD5 checksum, iteration 1 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:36.023 14:26:37 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:36.023 14:26:37 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:36.023 14:26:37 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:36.023 14:26:37 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:36.023 14:26:37 -- ftl/common.sh@154 -- # return 0 00:25:36.023 14:26:37 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:36.023 [2024-12-04 14:26:37.406617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
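The startup that just finished took 1340.965 ms against 178.120 ms for the clean start earlier in this log, with 'Recover open chunks P2L' alone accounting for 1104.405 ms: the price of the kill -9 is a full recovery pass over the NV cache, during which the two open chunks at offsets 8032 and 270176 were replayed above. The overall structure of the test, reconstructed from the upgrade_shutdown.sh xtrace markers rather than from the script itself:

  test_validate_checksum        # @111: hash 2 GiB of ftln1 while the old target is live
  tcp_target_shutdown_dirty     # @114: kill -9, leaving FTL without a clean shutdown
  tcp_target_setup              # @115: relaunch spdk_tgt from tgt.json; FTL runs recovery
  test_validate_checksum        # @116: the same md5 sums must come back after recovery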
00:25:36.023 [2024-12-04 14:26:37.406718] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78677 ] 00:25:36.283 [2024-12-04 14:26:37.553433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.283 [2024-12-04 14:26:37.691846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.660  [2024-12-04T14:26:39.693Z] Copying: 729/1024 [MB] (729 MBps) [2024-12-04T14:26:40.630Z] Copying: 1024/1024 [MB] (average 715 MBps) 00:25:39.165 00:25:39.165 14:26:40 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:39.165 14:26:40 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@103 -- # sum=cd3cf335b036c61fbc5fa4e2ce08f8d9 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@105 -- # [[ cd3cf335b036c61fbc5fa4e2ce08f8d9 != \c\d\3\c\f\3\3\5\b\0\3\6\c\6\1\f\b\c\5\f\a\4\e\2\c\e\0\8\f\8\d\9 ]] 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:41.709 Validate MD5 checksum, iteration 2 00:25:41.709 14:26:42 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:41.709 14:26:42 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:41.709 14:26:42 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:41.709 14:26:42 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:41.709 14:26:42 -- ftl/common.sh@154 -- # return 0 00:25:41.709 14:26:42 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:41.709 [2024-12-04 14:26:42.686933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
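Each checksum pass drives its own spdk_dd process, which is an SPDK application in its own right and therefore logs a full EAL and reactor startup before it copies anything. The invocation repeated throughout this log, with its flags annotated; $spdk_dir and $testfile are stand-ins for the absolute paths in the trace:

  dd_args=(
      '--cpumask=[1]'                              # pin the dd app's reactor to core 1
      --rpc-socket=/var/tmp/spdk.tgt.sock          # its own RPC socket, separate from the target's
      --json="$spdk_dir/test/ftl/config/ini.json"  # initiator config with the NVMe/TCP path to ftln1
      --ib=ftln1                                   # input bdev: the FTL namespace reached over TCP
      --of="$testfile"                             # output: a plain file to be md5summed
      --bs=1048576 --count=1024 --qd=2             # 1 MiB blocks, 1024 of them, queue depth 2
      --skip=1024                                  # window offset into the input, in blocks
  )
  "$spdk_dir/build/bin/spdk_dd" "${dd_args[@]}"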
00:25:41.709 [2024-12-04 14:26:42.687031] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78733 ] 00:25:41.709 [2024-12-04 14:26:42.833383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.709 [2024-12-04 14:26:42.976081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.085  [2024-12-04T14:26:45.117Z] Copying: 709/1024 [MB] (709 MBps) [2024-12-04T14:26:46.080Z] Copying: 1024/1024 [MB] (average 704 MBps) 00:25:44.615 00:25:44.615 14:26:45 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:44.615 14:26:45 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@103 -- # sum=e989ad3e27f9f4b8aa999b6c2709f9d1 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@105 -- # [[ e989ad3e27f9f4b8aa999b6c2709f9d1 != \e\9\8\9\a\d\3\e\2\7\f\9\f\4\b\8\a\a\9\9\9\b\6\c\2\7\0\9\f\9\d\1 ]] 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:46.534 14:26:47 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:46.793 14:26:47 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:46.793 14:26:47 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:46.793 14:26:47 -- ftl/common.sh@130 -- # [[ -n 78630 ]] 00:25:46.793 14:26:47 -- ftl/common.sh@131 -- # killprocess 78630 00:25:46.793 14:26:47 -- common/autotest_common.sh@936 -- # '[' -z 78630 ']' 00:25:46.793 14:26:47 -- common/autotest_common.sh@940 -- # kill -0 78630 00:25:46.793 14:26:47 -- common/autotest_common.sh@941 -- # uname 00:25:46.793 14:26:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:46.793 14:26:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78630 00:25:46.793 14:26:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:46.793 14:26:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:46.793 killing process with pid 78630 00:25:46.793 14:26:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78630' 00:25:46.793 14:26:48 -- common/autotest_common.sh@955 -- # kill 78630 00:25:46.793 14:26:48 -- common/autotest_common.sh@960 -- # wait 78630 00:25:47.361 [2024-12-04 14:26:48.547609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:25:47.361 [2024-12-04 14:26:48.559374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.559405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:47.361 [2024-12-04 14:26:48.559416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:47.361 [2024-12-04 14:26:48.559422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 
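Cleanup then tears the new target down gracefully: unlike the earlier kill -9, killprocess sends the default SIGTERM, which is why the remainder of the log shows FTL persisting its L2P, band, and trim metadata and setting the clean state before exit. A condensed paraphrase of the killprocess flow visible in the xtrace (the real helper in autotest_common.sh also special-cases sudo-wrapped processes, per the 'reactor_0 = sudo' test above):

  killprocess() {
      local pid=$1
      ps --no-headers -o comm= "$pid"   # the trace shows reactor_0, i.e. the SPDK target itself
      echo "killing process with pid $pid"
      kill "$pid"                       # plain SIGTERM, so FTL gets a clean shutdown
      wait "$pid"                       # reap it while the persist steps below run
  }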
[2024-12-04 14:26:48.559438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:47.361 [2024-12-04 14:26:48.561504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.561528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:47.361 [2024-12-04 14:26:48.561536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.055 ms 00:25:47.361 [2024-12-04 14:26:48.561542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.561715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.561726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:47.361 [2024-12-04 14:26:48.561732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:25:47.361 [2024-12-04 14:26:48.561737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.562739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.562756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:47.361 [2024-12-04 14:26:48.562763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.990 ms 00:25:47.361 [2024-12-04 14:26:48.562769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.563626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.563644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:25:47.361 [2024-12-04 14:26:48.563651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:25:47.361 [2024-12-04 14:26:48.563658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.570889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.570913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:47.361 [2024-12-04 14:26:48.570920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.206 ms 00:25:47.361 [2024-12-04 14:26:48.570926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.575006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.575032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:47.361 [2024-12-04 14:26:48.575040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.056 ms 00:25:47.361 [2024-12-04 14:26:48.575046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.575121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.575129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:47.361 [2024-12-04 14:26:48.575136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:47.361 [2024-12-04 14:26:48.575142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.582222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.582250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:47.361 [2024-12-04 14:26:48.582257] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.068 ms 00:25:47.361 [2024-12-04 14:26:48.582262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.589479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.589500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:47.361 [2024-12-04 14:26:48.589507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.193 ms 00:25:47.361 [2024-12-04 14:26:48.589512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.361 [2024-12-04 14:26:48.596678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.361 [2024-12-04 14:26:48.596698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:47.361 [2024-12-04 14:26:48.596705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.143 ms 00:25:47.361 [2024-12-04 14:26:48.596710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.603884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.362 [2024-12-04 14:26:48.603905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:47.362 [2024-12-04 14:26:48.603912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.131 ms 00:25:47.362 [2024-12-04 14:26:48.603917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.603940] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:47.362 [2024-12-04 14:26:48.603951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:47.362 [2024-12-04 14:26:48.603963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:47.362 [2024-12-04 14:26:48.603969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:47.362 [2024-12-04 14:26:48.603975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.603981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.603987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.603992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.603999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604033] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:47.362 [2024-12-04 14:26:48.604068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:47.362 [2024-12-04 14:26:48.604074] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ce649b4-bcad-4cec-9c2b-adcbb00d6b24 00:25:47.362 [2024-12-04 14:26:48.604080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:47.362 [2024-12-04 14:26:48.604094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:47.362 [2024-12-04 14:26:48.604100] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:47.362 [2024-12-04 14:26:48.604106] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:47.362 [2024-12-04 14:26:48.604111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:47.362 [2024-12-04 14:26:48.604117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:47.362 [2024-12-04 14:26:48.604123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:47.362 [2024-12-04 14:26:48.604128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:47.362 [2024-12-04 14:26:48.604133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:47.362 [2024-12-04 14:26:48.604138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.362 [2024-12-04 14:26:48.604144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:47.362 [2024-12-04 14:26:48.604150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:25:47.362 [2024-12-04 14:26:48.604160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.613809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.362 [2024-12-04 14:26:48.613829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:47.362 [2024-12-04 14:26:48.613837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.629 ms 00:25:47.362 [2024-12-04 14:26:48.613842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.613989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:47.362 [2024-12-04 14:26:48.613995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:47.362 [2024-12-04 14:26:48.614005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.133 ms 00:25:47.362 [2024-12-04 14:26:48.614010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.649177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.649200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:47.362 [2024-12-04 14:26:48.649209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.649215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.649237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.649243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:47.362 [2024-12-04 14:26:48.649252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.649258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.649300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.649307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:47.362 [2024-12-04 14:26:48.649313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.649319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.649331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.649337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:47.362 [2024-12-04 14:26:48.649343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.649351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.707802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.707829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:47.362 [2024-12-04 14:26:48.707838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.707845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.730868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.730891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:47.362 [2024-12-04 14:26:48.730902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.730908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.730950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.730956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:47.362 [2024-12-04 14:26:48.730962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.730968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.731006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:47.362 [2024-12-04 14:26:48.731012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.731018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.731101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:47.362 [2024-12-04 14:26:48.731107] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.731112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.731141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:47.362 [2024-12-04 14:26:48.731147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.731152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.731188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:47.362 [2024-12-04 14:26:48.731194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.731199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:47.362 [2024-12-04 14:26:48.731236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:47.362 [2024-12-04 14:26:48.731242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:47.362 [2024-12-04 14:26:48.731247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:47.362 [2024-12-04 14:26:48.731336] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 171.942 ms, result 0 00:25:47.932 14:26:49 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:47.932 14:26:49 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.932 14:26:49 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:47.932 14:26:49 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:47.932 14:26:49 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:47.932 14:26:49 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.932 14:26:49 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:47.932 Remove shared memory files 00:25:47.932 14:26:49 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:47.932 14:26:49 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:47.932 14:26:49 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:47.932 14:26:49 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78456 00:25:47.932 14:26:49 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:47.932 14:26:49 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:47.932 00:25:47.932 real 1m15.194s 00:25:47.932 user 1m47.302s 00:25:47.932 sys 0m16.816s 00:25:47.932 14:26:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:47.932 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:25:47.932 ************************************ 00:25:47.932 END TEST ftl_upgrade_shutdown 00:25:47.932 ************************************ 00:25:48.191 14:26:49 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:25:48.191 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:25:48.191 14:26:49 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:25:48.191 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:25:48.191 14:26:49 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:25:48.191 14:26:49 -- ftl/ftl.sh@14 -- # killprocess 70298 00:25:48.191 14:26:49 -- 
common/autotest_common.sh@936 -- # '[' -z 70298 ']' 00:25:48.191 14:26:49 -- common/autotest_common.sh@940 -- # kill -0 70298 00:25:48.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70298) - No such process 00:25:48.191 Process with pid 70298 is not found 00:25:48.191 14:26:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70298 is not found' 00:25:48.191 14:26:49 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:25:48.191 14:26:49 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=78842 00:25:48.191 14:26:49 -- ftl/ftl.sh@20 -- # waitforlisten 78842 00:25:48.191 14:26:49 -- common/autotest_common.sh@829 -- # '[' -z 78842 ']' 00:25:48.191 14:26:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.191 14:26:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.191 14:26:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.191 14:26:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.191 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:25:48.191 14:26:49 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:48.191 [2024-12-04 14:26:49.488859] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:48.191 [2024-12-04 14:26:49.488966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78842 ] 00:25:48.191 [2024-12-04 14:26:49.636925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.449 [2024-12-04 14:26:49.776767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.449 [2024-12-04 14:26:49.776922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.016 14:26:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:49.016 14:26:50 -- common/autotest_common.sh@862 -- # return 0 00:25:49.016 14:26:50 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:25:49.276 nvme0n1 00:25:49.276 14:26:50 -- ftl/ftl.sh@22 -- # clear_lvols 00:25:49.276 14:26:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.276 14:26:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:49.276 14:26:50 -- ftl/common.sh@28 -- # stores=4c002460-e29a-42dc-a274-04d03b2a4a37 00:25:49.276 14:26:50 -- ftl/common.sh@29 -- # for lvs in $stores 00:25:49.276 14:26:50 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c002460-e29a-42dc-a274-04d03b2a4a37 00:25:49.535 14:26:50 -- ftl/ftl.sh@23 -- # killprocess 78842 00:25:49.535 14:26:50 -- common/autotest_common.sh@936 -- # '[' -z 78842 ']' 00:25:49.535 14:26:50 -- common/autotest_common.sh@940 -- # kill -0 78842 00:25:49.535 14:26:50 -- common/autotest_common.sh@941 -- # uname 00:25:49.535 14:26:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.535 14:26:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78842 00:25:49.535 14:26:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:49.535 14:26:50 -- common/autotest_common.sh@946 -- # '[' 
reactor_0 = sudo ']' 00:25:49.535 killing process with pid 78842 00:25:49.535 14:26:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78842' 00:25:49.535 14:26:50 -- common/autotest_common.sh@955 -- # kill 78842 00:25:49.535 14:26:50 -- common/autotest_common.sh@960 -- # wait 78842 00:25:50.914 14:26:52 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:50.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.176 Waiting for block devices as requested 00:25:51.176 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.176 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.176 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:51.438 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.730 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:25:56.730 Remove shared memory files 00:25:56.730 14:26:57 -- ftl/ftl.sh@28 -- # remove_shm 00:25:56.730 14:26:57 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:56.730 14:26:57 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:56.730 14:26:57 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:56.730 14:26:57 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:56.730 14:26:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:56.730 14:26:57 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:56.730 00:25:56.730 real 12m13.353s 00:25:56.730 user 14m8.417s 00:25:56.730 sys 1m10.135s 00:25:56.730 14:26:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:56.730 ************************************ 00:25:56.730 END TEST ftl 00:25:56.730 ************************************ 00:25:56.730 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.730 14:26:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:56.730 14:26:57 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:25:56.730 14:26:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:56.730 14:26:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:56.730 14:26:57 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:25:56.730 14:26:57 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:25:56.730 14:26:57 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:25:56.730 14:26:57 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:25:56.730 14:26:57 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:25:56.730 14:26:57 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:25:56.730 14:26:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.730 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:25:56.730 14:26:57 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:25:56.730 14:26:57 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:25:56.730 14:26:57 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:25:56.730 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:25:57.674 INFO: APP EXITING 00:25:57.674 INFO: killing all VMs 00:25:57.674 INFO: killing vhost app 00:25:57.674 INFO: EXIT DONE 00:25:58.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:58.248 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:25:58.248 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:25:58.248 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:25:58.248 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:25:59.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:25:59.194 Cleaning 00:25:59.194 Removing: /var/run/dpdk/spdk0/config 00:25:59.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:59.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:59.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:59.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:59.194 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:59.194 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:59.194 Removing: /var/run/dpdk/spdk0 00:25:59.194 Removing: /var/run/dpdk/spdk_pid55964 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56171 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56465 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56563 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56653 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56752 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56848 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56882 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56924 00:25:59.194 Removing: /var/run/dpdk/spdk_pid56988 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57083 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57502 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57560 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57625 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57641 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57734 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57750 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57849 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57871 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57924 00:25:59.194 Removing: /var/run/dpdk/spdk_pid57951 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58004 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58016 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58172 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58214 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58296 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58374 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58399 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58472 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58491 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58528 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58554 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58595 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58621 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58662 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58688 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58729 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58750 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58791 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58811 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58852 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58878 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58919 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58944 00:25:59.194 Removing: /var/run/dpdk/spdk_pid58981 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59007 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59050 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59076 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59117 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59145 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59186 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59212 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59253 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59272 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59309 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59335 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59376 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59402 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59443 00:25:59.194 Removing: 
/var/run/dpdk/spdk_pid59470 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59511 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59535 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59579 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59608 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59649 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59681 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59721 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59748 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59790 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59868 00:25:59.194 Removing: /var/run/dpdk/spdk_pid59980 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60150 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60223 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60265 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60676 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60876 00:25:59.194 Removing: /var/run/dpdk/spdk_pid60991 00:25:59.194 Removing: /var/run/dpdk/spdk_pid61038 00:25:59.194 Removing: /var/run/dpdk/spdk_pid61064 00:25:59.194 Removing: /var/run/dpdk/spdk_pid61147 00:25:59.194 Removing: /var/run/dpdk/spdk_pid61800 00:25:59.194 Removing: /var/run/dpdk/spdk_pid61837 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62311 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62424 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62539 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62581 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62612 00:25:59.194 Removing: /var/run/dpdk/spdk_pid62643 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64576 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64715 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64725 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64737 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64784 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64788 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64800 00:25:59.194 Removing: /var/run/dpdk/spdk_pid64860 00:25:59.456 Removing: /var/run/dpdk/spdk_pid64864 00:25:59.456 Removing: /var/run/dpdk/spdk_pid64881 00:25:59.456 Removing: /var/run/dpdk/spdk_pid64917 00:25:59.456 Removing: /var/run/dpdk/spdk_pid64924 00:25:59.456 Removing: /var/run/dpdk/spdk_pid64942 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66380 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66489 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66609 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66700 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66776 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66854 00:25:59.456 Removing: /var/run/dpdk/spdk_pid66954 00:25:59.456 Removing: /var/run/dpdk/spdk_pid67034 00:25:59.456 Removing: /var/run/dpdk/spdk_pid67174 00:25:59.456 Removing: /var/run/dpdk/spdk_pid67559 00:25:59.456 Removing: /var/run/dpdk/spdk_pid67596 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68026 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68208 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68316 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68415 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68468 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68498 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68801 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68864 00:25:59.456 Removing: /var/run/dpdk/spdk_pid68938 00:25:59.456 Removing: /var/run/dpdk/spdk_pid69329 00:25:59.456 Removing: /var/run/dpdk/spdk_pid69481 00:25:59.456 Removing: /var/run/dpdk/spdk_pid70298 00:25:59.456 Removing: /var/run/dpdk/spdk_pid70429 00:25:59.456 Removing: /var/run/dpdk/spdk_pid70618 00:25:59.456 Removing: /var/run/dpdk/spdk_pid70714 00:25:59.456 Removing: /var/run/dpdk/spdk_pid70990 00:25:59.456 Removing: /var/run/dpdk/spdk_pid71239 
00:25:59.456 Removing: /var/run/dpdk/spdk_pid71643 00:25:59.456 Removing: /var/run/dpdk/spdk_pid71828 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72010 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72062 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72203 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72233 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72288 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72518 00:25:59.456 Removing: /var/run/dpdk/spdk_pid72778 00:25:59.456 Removing: /var/run/dpdk/spdk_pid73278 00:25:59.456 Removing: /var/run/dpdk/spdk_pid74011 00:25:59.456 Removing: /var/run/dpdk/spdk_pid74628 00:25:59.456 Removing: /var/run/dpdk/spdk_pid75408 00:25:59.456 Removing: /var/run/dpdk/spdk_pid75552 00:25:59.456 Removing: /var/run/dpdk/spdk_pid75636 00:25:59.456 Removing: /var/run/dpdk/spdk_pid75995 00:25:59.456 Removing: /var/run/dpdk/spdk_pid76054 00:25:59.456 Removing: /var/run/dpdk/spdk_pid76547 00:25:59.456 Removing: /var/run/dpdk/spdk_pid77052 00:25:59.456 Removing: /var/run/dpdk/spdk_pid77890 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78015 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78064 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78121 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78180 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78233 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78456 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78495 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78568 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78630 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78677 00:25:59.456 Removing: /var/run/dpdk/spdk_pid78733 00:25:59.457 Removing: /var/run/dpdk/spdk_pid78842 00:25:59.457 Clean 00:25:59.457 killing process with pid 48166 00:25:59.457 killing process with pid 48172 00:25:59.717 14:27:00 -- common/autotest_common.sh@1446 -- # return 0 00:25:59.717 14:27:00 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:25:59.717 14:27:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.717 14:27:00 -- common/autotest_common.sh@10 -- # set +x 00:25:59.717 14:27:00 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:25:59.717 14:27:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.717 14:27:00 -- common/autotest_common.sh@10 -- # set +x 00:25:59.717 14:27:01 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:59.717 14:27:01 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:59.717 14:27:01 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:59.717 14:27:01 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:25:59.717 14:27:01 -- spdk/autotest.sh@383 -- # hostname 00:25:59.717 14:27:01 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:59.717 geninfo: WARNING: invalid characters removed from testname! 
00:26:26.309 14:27:23 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:26.309 14:27:27 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:28.308 14:27:29 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:30.226 14:27:31 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:32.129 14:27:33 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:34.040 14:27:35 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:36.588 14:27:37 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:36.588 14:27:37 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:26:36.588 14:27:37 -- common/autotest_common.sh@1690 -- $ lcov --version 00:26:36.588 14:27:37 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:26:36.588 14:27:37 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:26:36.588 14:27:37 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:26:36.588 14:27:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:26:36.588 14:27:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:26:36.588 14:27:37 -- scripts/common.sh@335 -- $ IFS=.-: 00:26:36.588 14:27:37 -- scripts/common.sh@335 -- $ read -ra ver1 00:26:36.588 14:27:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:26:36.588 14:27:37 -- scripts/common.sh@336 -- $ read -ra ver2 00:26:36.588 14:27:37 -- scripts/common.sh@337 -- $ local 'op=<' 00:26:36.588 14:27:37 -- scripts/common.sh@339 -- $ ver1_l=2 00:26:36.588 14:27:37 -- scripts/common.sh@340 -- $ ver2_l=1 00:26:36.588 14:27:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:26:36.588 14:27:37 -- scripts/common.sh@343 -- $ case "$op" in 00:26:36.588 14:27:37 -- scripts/common.sh@344 -- $ : 1 00:26:36.588 14:27:37 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:26:36.588 14:27:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:36.588 14:27:37 -- scripts/common.sh@364 -- $ decimal 1 00:26:36.588 14:27:37 -- scripts/common.sh@352 -- $ local d=1 00:26:36.588 14:27:37 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:36.588 14:27:37 -- scripts/common.sh@354 -- $ echo 1 00:26:36.588 14:27:37 -- scripts/common.sh@364 -- $ ver1[v]=1 00:26:36.588 14:27:37 -- scripts/common.sh@365 -- $ decimal 2 00:26:36.588 14:27:37 -- scripts/common.sh@352 -- $ local d=2 00:26:36.588 14:27:37 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:36.588 14:27:37 -- scripts/common.sh@354 -- $ echo 2 00:26:36.588 14:27:37 -- scripts/common.sh@365 -- $ ver2[v]=2 00:26:36.588 14:27:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:26:36.588 14:27:37 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:26:36.588 14:27:37 -- scripts/common.sh@367 -- $ return 0 00:26:36.588 14:27:37 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.588 14:27:37 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:26:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.588 --rc genhtml_branch_coverage=1 00:26:36.588 --rc genhtml_function_coverage=1 00:26:36.588 --rc genhtml_legend=1 00:26:36.588 --rc geninfo_all_blocks=1 00:26:36.588 --rc geninfo_unexecuted_blocks=1 00:26:36.588 00:26:36.588 ' 00:26:36.588 14:27:37 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:26:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.588 --rc genhtml_branch_coverage=1 00:26:36.588 --rc genhtml_function_coverage=1 00:26:36.588 --rc genhtml_legend=1 00:26:36.588 --rc geninfo_all_blocks=1 00:26:36.588 --rc geninfo_unexecuted_blocks=1 00:26:36.588 00:26:36.588 ' 00:26:36.588 14:27:37 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:26:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.588 --rc genhtml_branch_coverage=1 00:26:36.588 --rc genhtml_function_coverage=1 00:26:36.588 --rc genhtml_legend=1 00:26:36.588 --rc geninfo_all_blocks=1 00:26:36.588 --rc geninfo_unexecuted_blocks=1 00:26:36.588 00:26:36.588 ' 00:26:36.588 14:27:37 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:26:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.588 --rc genhtml_branch_coverage=1 00:26:36.588 --rc genhtml_function_coverage=1 00:26:36.588 --rc genhtml_legend=1 00:26:36.588 --rc geninfo_all_blocks=1 00:26:36.588 --rc geninfo_unexecuted_blocks=1 00:26:36.588 00:26:36.588 ' 00:26:36.588 14:27:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.588 14:27:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:36.588 14:27:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.588 14:27:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.588 14:27:37 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.588 14:27:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.588 14:27:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.588 14:27:37 -- paths/export.sh@5 -- $ export PATH 00:26:36.588 14:27:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.588 14:27:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:36.588 14:27:37 -- common/autobuild_common.sh@440 -- $ date +%s 00:26:36.588 14:27:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733322457.XXXXXX 00:26:36.588 14:27:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733322457.Gornpd 00:26:36.588 14:27:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:26:36.589 14:27:37 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:26:36.589 14:27:37 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:36.589 14:27:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:36.589 14:27:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:36.589 14:27:37 -- common/autobuild_common.sh@456 -- $ get_config_params 00:26:36.589 14:27:37 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:26:36.589 14:27:37 -- common/autotest_common.sh@10 -- $ set +x 00:26:36.589 14:27:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:26:36.589 14:27:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:26:36.589 14:27:37 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:36.589 14:27:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:36.589 14:27:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:26:36.589 14:27:37 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:36.589 14:27:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:36.589 14:27:37 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:36.589 14:27:37 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:36.589 14:27:37 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:36.589 14:27:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:36.589 + [[ -n 4986 ]] 00:26:36.589 + sudo kill 4986 00:26:36.600 [Pipeline] } 00:26:36.617 [Pipeline] // timeout 00:26:36.623 [Pipeline] } 00:26:36.637 [Pipeline] // stage 00:26:36.642 [Pipeline] } 00:26:36.657 [Pipeline] // catchError 00:26:36.667 [Pipeline] stage 00:26:36.669 [Pipeline] { (Stop VM) 00:26:36.682 [Pipeline] sh 00:26:36.969 + vagrant halt 00:26:39.515 ==> default: Halting domain... 00:26:43.765 [Pipeline] sh 00:26:44.042 + vagrant destroy -f 00:26:46.587 ==> default: Removing domain... 00:26:47.174 [Pipeline] sh 00:26:47.540 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:26:47.551 [Pipeline] } 00:26:47.566 [Pipeline] // stage 00:26:47.572 [Pipeline] } 00:26:47.587 [Pipeline] // dir 00:26:47.592 [Pipeline] } 00:26:47.608 [Pipeline] // wrap 00:26:47.616 [Pipeline] } 00:26:47.629 [Pipeline] // catchError 00:26:47.640 [Pipeline] stage 00:26:47.642 [Pipeline] { (Epilogue) 00:26:47.656 [Pipeline] sh 00:26:47.943 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:52.156 [Pipeline] catchError 00:26:52.159 [Pipeline] { 00:26:52.173 [Pipeline] sh 00:26:52.459 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:52.459 Artifacts sizes are good 00:26:52.471 [Pipeline] } 00:26:52.483 [Pipeline] // catchError 00:26:52.494 [Pipeline] archiveArtifacts 00:26:52.502 Archiving artifacts 00:26:52.594 [Pipeline] cleanWs 00:26:52.608 [WS-CLEANUP] Deleting project workspace... 00:26:52.608 [WS-CLEANUP] Deferred wipeout is used... 00:26:52.615 [WS-CLEANUP] done 00:26:52.617 [Pipeline] } 00:26:52.634 [Pipeline] // stage 00:26:52.641 [Pipeline] } 00:26:52.655 [Pipeline] // node 00:26:52.661 [Pipeline] End of Pipeline 00:26:52.709 Finished: SUCCESS
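The xtrace near the end of this log (lt 1.15 2, resolving to cmp_versions 1.15 '<' 2 in scripts/common.sh) steps through a component-wise version comparison: both strings are split on '.', '-' and ':', the fields are compared numerically left to right, and the first inequality decides the result. A minimal sketch of the logic the trace implies — not the exact scripts/common.sh source — assuming missing or non-numeric fields compare as 0, mirroring the decimal helper seen in the trace:

    # Sketch: succeed iff "$1 $2 $3" holds, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0   # non-numeric field -> 0
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op != '<' && $op != '>' ]]      # equal versions fail strict compares
    }

Here lt 1.15 2 succeeds at the first field (1 < 2), matching the trace's return 0 before lcov 1.15's options are mapped onto LCOV_OPTS.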